00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 424 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3086 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.056 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.056 The recommended git tool is: git 00:00:00.057 using credential 00000000-0000-0000-0000-000000000002 00:00:00.059 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.127 Fetching changes from the remote Git repository 00:00:00.129 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.202 Using shallow fetch with depth 1 00:00:00.202 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.202 > git --version # timeout=10 00:00:00.257 > git --version # 'git version 2.39.2' 00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.258 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.258 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.350 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.361 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.372 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:06.372 > git config core.sparsecheckout # timeout=10 00:00:06.383 > git read-tree -mu HEAD # timeout=10 00:00:06.398 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:06.418 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:06.418 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:06.522 [Pipeline] Start of Pipeline 00:00:06.536 [Pipeline] library 00:00:06.537 Loading library shm_lib@master 00:00:06.537 Library shm_lib@master is cached. Copying from home. 00:00:06.555 [Pipeline] node 00:00:21.557 Still waiting to schedule task 00:00:21.557 Waiting for next available executor on ‘vagrant-vm-host’ 00:16:01.470 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:16:01.472 [Pipeline] { 00:16:01.488 [Pipeline] catchError 00:16:01.489 [Pipeline] { 00:16:01.506 [Pipeline] wrap 00:16:01.517 [Pipeline] { 00:16:01.526 [Pipeline] stage 00:16:01.530 [Pipeline] { (Prologue) 00:16:01.554 [Pipeline] echo 00:16:01.555 Node: VM-host-SM17 00:16:01.561 [Pipeline] cleanWs 00:16:01.573 [WS-CLEANUP] Deleting project workspace... 00:16:01.573 [WS-CLEANUP] Deferred wipeout is used... 
00:16:01.581 [WS-CLEANUP] done 00:16:01.782 [Pipeline] setCustomBuildProperty 00:16:01.859 [Pipeline] nodesByLabel 00:16:01.860 Found a total of 1 nodes with the 'sorcerer' label 00:16:01.870 [Pipeline] httpRequest 00:16:01.874 HttpMethod: GET 00:16:01.875 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:16:01.875 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:16:01.876 Response Code: HTTP/1.1 200 OK 00:16:01.876 Success: Status code 200 is in the accepted range: 200,404 00:16:01.877 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:16:02.018 [Pipeline] sh 00:16:02.299 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:16:02.320 [Pipeline] httpRequest 00:16:02.324 HttpMethod: GET 00:16:02.325 URL: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:16:02.326 Sending request to url: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:16:02.326 Response Code: HTTP/1.1 200 OK 00:16:02.327 Success: Status code 200 is in the accepted range: 200,404 00:16:02.327 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:16:04.455 [Pipeline] sh 00:16:04.730 + tar --no-same-owner -xf spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:16:08.025 [Pipeline] sh 00:16:08.372 + git -C spdk log --oneline -n5 00:16:08.372 4506c0c36 test/common: Enable inherit_errexit 00:16:08.372 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:16:08.372 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:16:08.372 1dc065205 test/scheduler: Calculate median of the cpu load samples 00:16:08.372 b22f1b34d test/scheduler: Enhance lookup of the $old_cgroup in move_proc() 00:16:08.393 [Pipeline] withCredentials 00:16:08.401 > git --version # timeout=10 00:16:08.414 > git --version # 'git version 2.39.2' 00:16:08.427 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:16:08.429 [Pipeline] { 00:16:08.437 [Pipeline] retry 00:16:08.439 [Pipeline] { 00:16:08.456 [Pipeline] sh 00:16:08.734 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:16:08.746 [Pipeline] } 00:16:08.769 [Pipeline] // retry 00:16:08.775 [Pipeline] } 00:16:08.797 [Pipeline] // withCredentials 00:16:08.810 [Pipeline] httpRequest 00:16:08.814 HttpMethod: GET 00:16:08.815 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:16:08.816 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:16:08.818 Response Code: HTTP/1.1 200 OK 00:16:08.818 Success: Status code 200 is in the accepted range: 200,404 00:16:08.819 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:16:10.055 [Pipeline] sh 00:16:10.335 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:16:12.245 [Pipeline] sh 00:16:12.519 + git -C dpdk log --oneline -n5 00:16:12.519 eeb0605f11 version: 23.11.0 00:16:12.519 238778122a doc: update release notes for 23.11 00:16:12.519 46aa6b3cfc doc: fix description of RSS features 00:16:12.519 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:16:12.519 7e421ae345 devtools: support skipping forbid rule check 00:16:12.539 [Pipeline] writeFile 00:16:12.557 
[Pipeline] sh 00:16:12.835 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:16:12.847 [Pipeline] sh 00:16:13.125 + cat autorun-spdk.conf 00:16:13.125 SPDK_RUN_FUNCTIONAL_TEST=1 00:16:13.125 SPDK_TEST_NVMF=1 00:16:13.125 SPDK_TEST_NVMF_TRANSPORT=tcp 00:16:13.125 SPDK_TEST_USDT=1 00:16:13.125 SPDK_RUN_UBSAN=1 00:16:13.125 SPDK_TEST_NVMF_MDNS=1 00:16:13.125 NET_TYPE=virt 00:16:13.125 SPDK_JSONRPC_GO_CLIENT=1 00:16:13.125 SPDK_TEST_NATIVE_DPDK=v23.11 00:16:13.125 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:16:13.125 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:16:13.131 RUN_NIGHTLY=1 00:16:13.134 [Pipeline] } 00:16:13.152 [Pipeline] // stage 00:16:13.169 [Pipeline] stage 00:16:13.172 [Pipeline] { (Run VM) 00:16:13.188 [Pipeline] sh 00:16:13.469 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:16:13.469 + echo 'Start stage prepare_nvme.sh' 00:16:13.469 Start stage prepare_nvme.sh 00:16:13.469 + [[ -n 0 ]] 00:16:13.469 + disk_prefix=ex0 00:16:13.469 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:16:13.469 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:16:13.469 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:16:13.469 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:16:13.469 ++ SPDK_TEST_NVMF=1 00:16:13.469 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:16:13.469 ++ SPDK_TEST_USDT=1 00:16:13.469 ++ SPDK_RUN_UBSAN=1 00:16:13.469 ++ SPDK_TEST_NVMF_MDNS=1 00:16:13.469 ++ NET_TYPE=virt 00:16:13.469 ++ SPDK_JSONRPC_GO_CLIENT=1 00:16:13.469 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:16:13.469 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:16:13.469 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:16:13.469 ++ RUN_NIGHTLY=1 00:16:13.469 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:16:13.469 + nvme_files=() 00:16:13.469 + declare -A nvme_files 00:16:13.469 + backend_dir=/var/lib/libvirt/images/backends 00:16:13.469 + nvme_files['nvme.img']=5G 00:16:13.469 + nvme_files['nvme-cmb.img']=5G 00:16:13.469 + nvme_files['nvme-multi0.img']=4G 00:16:13.469 + nvme_files['nvme-multi1.img']=4G 00:16:13.469 + nvme_files['nvme-multi2.img']=4G 00:16:13.469 + nvme_files['nvme-openstack.img']=8G 00:16:13.469 + nvme_files['nvme-zns.img']=5G 00:16:13.469 + (( SPDK_TEST_NVME_PMR == 1 )) 00:16:13.469 + (( SPDK_TEST_FTL == 1 )) 00:16:13.469 + (( SPDK_TEST_NVME_FDP == 1 )) 00:16:13.469 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:16:13.469 + for nvme in "${!nvme_files[@]}" 00:16:13.469 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:16:13.469 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:16:13.469 + for nvme in "${!nvme_files[@]}" 00:16:13.469 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:16:13.469 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:16:13.469 + for nvme in "${!nvme_files[@]}" 00:16:13.469 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:16:13.469 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:16:13.469 + for nvme in "${!nvme_files[@]}" 00:16:13.469 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:16:13.469 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:16:13.469 + for nvme in "${!nvme_files[@]}" 00:16:13.469 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:16:13.469 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:16:13.469 + for nvme in "${!nvme_files[@]}" 00:16:13.469 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:16:13.469 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:16:13.469 + for nvme in "${!nvme_files[@]}" 00:16:13.469 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:16:14.037 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:16:14.037 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:16:14.037 + echo 'End stage prepare_nvme.sh' 00:16:14.037 End stage prepare_nvme.sh 00:16:14.048 [Pipeline] sh 00:16:14.342 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:16:14.342 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:16:14.342 00:16:14.342 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:16:14.342 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:16:14.342 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:16:14.342 HELP=0 00:16:14.342 DRY_RUN=0 00:16:14.342 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:16:14.342 NVME_DISKS_TYPE=nvme,nvme, 00:16:14.342 NVME_AUTO_CREATE=0 00:16:14.342 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:16:14.342 NVME_CMB=,, 00:16:14.342 NVME_PMR=,, 00:16:14.342 NVME_ZNS=,, 00:16:14.342 NVME_MS=,, 00:16:14.342 NVME_FDP=,, 00:16:14.342 
SPDK_VAGRANT_DISTRO=fedora38 00:16:14.342 SPDK_VAGRANT_VMCPU=10 00:16:14.342 SPDK_VAGRANT_VMRAM=12288 00:16:14.342 SPDK_VAGRANT_PROVIDER=libvirt 00:16:14.342 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:16:14.342 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:16:14.342 SPDK_OPENSTACK_NETWORK=0 00:16:14.342 VAGRANT_PACKAGE_BOX=0 00:16:14.342 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:16:14.342 FORCE_DISTRO=true 00:16:14.342 VAGRANT_BOX_VERSION= 00:16:14.342 EXTRA_VAGRANTFILES= 00:16:14.342 NIC_MODEL=e1000 00:16:14.342 00:16:14.342 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:16:14.342 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:16:17.625 Bringing machine 'default' up with 'libvirt' provider... 00:16:18.190 ==> default: Creating image (snapshot of base box volume). 00:16:18.190 ==> default: Creating domain with the following settings... 00:16:18.190 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715733741_9160f6a33384247c1a5a 00:16:18.190 ==> default: -- Domain type: kvm 00:16:18.190 ==> default: -- Cpus: 10 00:16:18.191 ==> default: -- Feature: acpi 00:16:18.191 ==> default: -- Feature: apic 00:16:18.191 ==> default: -- Feature: pae 00:16:18.191 ==> default: -- Memory: 12288M 00:16:18.191 ==> default: -- Memory Backing: hugepages: 00:16:18.191 ==> default: -- Management MAC: 00:16:18.191 ==> default: -- Loader: 00:16:18.191 ==> default: -- Nvram: 00:16:18.191 ==> default: -- Base box: spdk/fedora38 00:16:18.191 ==> default: -- Storage pool: default 00:16:18.191 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715733741_9160f6a33384247c1a5a.img (20G) 00:16:18.191 ==> default: -- Volume Cache: default 00:16:18.191 ==> default: -- Kernel: 00:16:18.191 ==> default: -- Initrd: 00:16:18.191 ==> default: -- Graphics Type: vnc 00:16:18.191 ==> default: -- Graphics Port: -1 00:16:18.191 ==> default: -- Graphics IP: 127.0.0.1 00:16:18.191 ==> default: -- Graphics Password: Not defined 00:16:18.191 ==> default: -- Video Type: cirrus 00:16:18.191 ==> default: -- Video VRAM: 9216 00:16:18.191 ==> default: -- Sound Type: 00:16:18.191 ==> default: -- Keymap: en-us 00:16:18.191 ==> default: -- TPM Path: 00:16:18.191 ==> default: -- INPUT: type=mouse, bus=ps2 00:16:18.191 ==> default: -- Command line args: 00:16:18.191 ==> default: -> value=-device, 00:16:18.191 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:16:18.191 ==> default: -> value=-drive, 00:16:18.191 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:16:18.191 ==> default: -> value=-device, 00:16:18.191 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:18.191 ==> default: -> value=-device, 00:16:18.191 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:16:18.191 ==> default: -> value=-drive, 00:16:18.191 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:16:18.191 ==> default: -> value=-device, 00:16:18.191 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:18.191 ==> default: -> value=-drive, 00:16:18.191 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:16:18.191 ==> default: -> value=-device, 00:16:18.191 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:18.191 ==> default: -> value=-drive, 00:16:18.191 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:16:18.191 ==> default: -> value=-device, 00:16:18.191 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:18.449 ==> default: Creating shared folders metadata... 00:16:18.449 ==> default: Starting domain. 00:16:20.347 ==> default: Waiting for domain to get an IP address... 00:16:42.264 ==> default: Waiting for SSH to become available... 00:16:42.264 ==> default: Configuring and enabling network interfaces... 00:16:45.546 default: SSH address: 192.168.121.119:22 00:16:45.546 default: SSH username: vagrant 00:16:45.546 default: SSH auth method: private key 00:16:47.445 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:16:54.040 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:17:00.604 ==> default: Mounting SSHFS shared folder... 00:17:01.982 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:17:01.982 ==> default: Checking Mount.. 00:17:02.917 ==> default: Folder Successfully Mounted! 00:17:02.917 ==> default: Running provisioner: file... 00:17:03.922 default: ~/.gitconfig => .gitconfig 00:17:04.181 00:17:04.181 SUCCESS! 00:17:04.181 00:17:04.181 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:17:04.181 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:17:04.181 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:17:04.181 00:17:04.191 [Pipeline] } 00:17:04.209 [Pipeline] // stage 00:17:04.220 [Pipeline] dir 00:17:04.220 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:17:04.222 [Pipeline] { 00:17:04.237 [Pipeline] catchError 00:17:04.239 [Pipeline] { 00:17:04.255 [Pipeline] sh 00:17:04.534 + vagrant ssh-config --host+ vagrant 00:17:04.534 sed -ne /^Host/,$p 00:17:04.534 + tee ssh_conf 00:17:07.821 Host vagrant 00:17:07.821 HostName 192.168.121.119 00:17:07.821 User vagrant 00:17:07.821 Port 22 00:17:07.821 UserKnownHostsFile /dev/null 00:17:07.821 StrictHostKeyChecking no 00:17:07.821 PasswordAuthentication no 00:17:07.821 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:17:07.821 IdentitiesOnly yes 00:17:07.821 LogLevel FATAL 00:17:07.821 ForwardAgent yes 00:17:07.821 ForwardX11 yes 00:17:07.821 00:17:07.836 [Pipeline] withEnv 00:17:07.838 [Pipeline] { 00:17:07.854 [Pipeline] sh 00:17:08.135 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:17:08.135 source /etc/os-release 00:17:08.135 [[ -e /image.version ]] && img=$(< /image.version) 00:17:08.135 # Minimal, systemd-like check. 
00:17:08.135 if [[ -e /.dockerenv ]]; then 00:17:08.135 # Clear garbage from the node's name: 00:17:08.135 # agt-er_autotest_547-896 -> autotest_547-896 00:17:08.135 # $HOSTNAME is the actual container id 00:17:08.135 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:17:08.135 if mountpoint -q /etc/hostname; then 00:17:08.135 # We can assume this is a mount from a host where container is running, 00:17:08.135 # so fetch its hostname to easily identify the target swarm worker. 00:17:08.135 container="$(< /etc/hostname) ($agent)" 00:17:08.135 else 00:17:08.135 # Fallback 00:17:08.135 container=$agent 00:17:08.135 fi 00:17:08.135 fi 00:17:08.135 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:17:08.135 00:17:08.147 [Pipeline] } 00:17:08.172 [Pipeline] // withEnv 00:17:08.180 [Pipeline] setCustomBuildProperty 00:17:08.194 [Pipeline] stage 00:17:08.196 [Pipeline] { (Tests) 00:17:08.217 [Pipeline] sh 00:17:08.495 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:17:08.769 [Pipeline] timeout 00:17:08.769 Timeout set to expire in 40 min 00:17:08.771 [Pipeline] { 00:17:08.791 [Pipeline] sh 00:17:09.070 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:17:09.638 HEAD is now at 4506c0c36 test/common: Enable inherit_errexit 00:17:09.652 [Pipeline] sh 00:17:09.933 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:17:10.207 [Pipeline] sh 00:17:10.487 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:17:10.502 [Pipeline] sh 00:17:10.782 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:17:10.782 ++ readlink -f spdk_repo 00:17:10.782 + DIR_ROOT=/home/vagrant/spdk_repo 00:17:10.782 + [[ -n /home/vagrant/spdk_repo ]] 00:17:10.782 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:17:10.782 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:17:10.782 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:17:10.782 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:17:10.782 + [[ -d /home/vagrant/spdk_repo/output ]] 00:17:10.782 + cd /home/vagrant/spdk_repo 00:17:10.782 + source /etc/os-release 00:17:10.782 ++ NAME='Fedora Linux' 00:17:10.782 ++ VERSION='38 (Cloud Edition)' 00:17:10.782 ++ ID=fedora 00:17:10.782 ++ VERSION_ID=38 00:17:10.782 ++ VERSION_CODENAME= 00:17:10.782 ++ PLATFORM_ID=platform:f38 00:17:10.782 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:17:10.782 ++ ANSI_COLOR='0;38;2;60;110;180' 00:17:10.782 ++ LOGO=fedora-logo-icon 00:17:10.782 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:17:10.782 ++ HOME_URL=https://fedoraproject.org/ 00:17:10.782 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:17:10.782 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:17:10.782 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:17:10.782 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:17:10.782 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:17:10.782 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:17:10.782 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:17:10.782 ++ SUPPORT_END=2024-05-14 00:17:10.782 ++ VARIANT='Cloud Edition' 00:17:10.782 ++ VARIANT_ID=cloud 00:17:10.782 + uname -a 00:17:10.782 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:17:10.782 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:17:11.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:11.350 Hugepages 00:17:11.350 node hugesize free / total 00:17:11.350 node0 1048576kB 0 / 0 00:17:11.350 node0 2048kB 0 / 0 00:17:11.350 00:17:11.350 Type BDF Vendor Device NUMA Driver Device Block devices 00:17:11.350 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:17:11.350 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:17:11.350 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:17:11.350 + rm -f /tmp/spdk-ld-path 00:17:11.350 + source autorun-spdk.conf 00:17:11.350 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:17:11.350 ++ SPDK_TEST_NVMF=1 00:17:11.350 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:17:11.350 ++ SPDK_TEST_USDT=1 00:17:11.350 ++ SPDK_RUN_UBSAN=1 00:17:11.350 ++ SPDK_TEST_NVMF_MDNS=1 00:17:11.350 ++ NET_TYPE=virt 00:17:11.350 ++ SPDK_JSONRPC_GO_CLIENT=1 00:17:11.350 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:17:11.350 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:17:11.350 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:11.350 ++ RUN_NIGHTLY=1 00:17:11.350 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:17:11.350 + [[ -n '' ]] 00:17:11.350 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:17:11.609 + for M in /var/spdk/build-*-manifest.txt 00:17:11.609 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:17:11.609 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:17:11.609 + for M in /var/spdk/build-*-manifest.txt 00:17:11.609 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:17:11.609 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:17:11.609 ++ uname 00:17:11.609 + [[ Linux == \L\i\n\u\x ]] 00:17:11.609 + sudo dmesg -T 00:17:11.609 + sudo dmesg --clear 00:17:11.609 + dmesg_pid=5833 00:17:11.609 + sudo dmesg -Tw 00:17:11.609 + [[ Fedora Linux == FreeBSD ]] 00:17:11.609 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:11.609 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:11.609 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:17:11.609 + 
[[ -x /usr/src/fio-static/fio ]] 00:17:11.609 + export FIO_BIN=/usr/src/fio-static/fio 00:17:11.609 + FIO_BIN=/usr/src/fio-static/fio 00:17:11.609 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:17:11.609 + [[ ! -v VFIO_QEMU_BIN ]] 00:17:11.609 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:17:11.609 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:11.609 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:11.609 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:17:11.609 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:11.609 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:11.609 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:11.609 Test configuration: 00:17:11.609 SPDK_RUN_FUNCTIONAL_TEST=1 00:17:11.609 SPDK_TEST_NVMF=1 00:17:11.609 SPDK_TEST_NVMF_TRANSPORT=tcp 00:17:11.609 SPDK_TEST_USDT=1 00:17:11.609 SPDK_RUN_UBSAN=1 00:17:11.609 SPDK_TEST_NVMF_MDNS=1 00:17:11.609 NET_TYPE=virt 00:17:11.609 SPDK_JSONRPC_GO_CLIENT=1 00:17:11.609 SPDK_TEST_NATIVE_DPDK=v23.11 00:17:11.609 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:17:11.609 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:11.609 RUN_NIGHTLY=1 00:43:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.609 00:43:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:11.609 00:43:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.609 00:43:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.609 00:43:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.609 00:43:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.609 00:43:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.609 00:43:14 -- paths/export.sh@5 -- $ export PATH 00:17:11.609 00:43:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.609 00:43:14 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:17:11.609 00:43:14 -- common/autobuild_common.sh@437 -- $ date +%s 
00:17:11.609 00:43:14 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715733794.XXXXXX 00:17:11.609 00:43:14 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715733794.N2s9xI 00:17:11.609 00:43:14 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:17:11.609 00:43:14 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:17:11.609 00:43:14 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:17:11.609 00:43:14 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:17:11.609 00:43:14 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:17:11.609 00:43:14 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:17:11.609 00:43:14 -- common/autobuild_common.sh@453 -- $ get_config_params 00:17:11.609 00:43:14 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:17:11.609 00:43:14 -- common/autotest_common.sh@10 -- $ set +x 00:17:11.609 00:43:14 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:17:11.869 00:43:14 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:17:11.869 00:43:14 -- pm/common@17 -- $ local monitor 00:17:11.869 00:43:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:11.869 00:43:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:11.869 00:43:14 -- pm/common@25 -- $ sleep 1 00:17:11.869 00:43:14 -- pm/common@21 -- $ date +%s 00:17:11.869 00:43:14 -- pm/common@21 -- $ date +%s 00:17:11.869 00:43:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715733794 00:17:11.869 00:43:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715733794 00:17:11.869 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715733794_collect-vmstat.pm.log 00:17:11.869 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715733794_collect-cpu-load.pm.log 00:17:12.807 00:43:15 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:17:12.807 00:43:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:17:12.807 00:43:15 -- spdk/autobuild.sh@12 -- $ umask 022 00:17:12.807 00:43:15 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:17:12.807 00:43:15 -- spdk/autobuild.sh@16 -- $ date -u 00:17:12.807 Wed May 15 12:43:15 AM UTC 2024 00:17:12.807 00:43:15 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:17:12.807 v24.05-pre-658-g4506c0c36 00:17:12.807 00:43:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:17:12.807 00:43:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:17:12.807 00:43:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:17:12.807 00:43:15 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:17:12.807 00:43:15 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:17:12.807 00:43:15 -- 
common/autotest_common.sh@10 -- $ set +x 00:17:12.807 ************************************ 00:17:12.807 START TEST ubsan 00:17:12.807 ************************************ 00:17:12.807 using ubsan 00:17:12.807 00:43:15 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:17:12.807 00:17:12.807 real 0m0.000s 00:17:12.807 user 0m0.000s 00:17:12.807 sys 0m0.000s 00:17:12.807 00:43:15 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:17:12.807 00:43:15 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:17:12.807 ************************************ 00:17:12.807 END TEST ubsan 00:17:12.807 ************************************ 00:17:12.807 00:43:15 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:17:12.807 00:43:15 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:17:12.807 00:43:15 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:17:12.807 00:43:15 -- common/autotest_common.sh@1098 -- $ '[' 2 -le 1 ']' 00:17:12.807 00:43:15 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:17:12.807 00:43:15 -- common/autotest_common.sh@10 -- $ set +x 00:17:12.807 ************************************ 00:17:12.807 START TEST build_native_dpdk 00:17:12.807 ************************************ 00:17:12.807 00:43:15 build_native_dpdk -- common/autotest_common.sh@1122 -- $ _build_native_dpdk 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:17:12.807 00:43:15 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:17:12.807 eeb0605f11 version: 23.11.0 00:17:12.807 238778122a doc: update release notes for 23.11 00:17:12.807 46aa6b3cfc doc: fix description of RSS features 00:17:12.807 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:17:12.807 7e421ae345 devtools: support skipping forbid rule check 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:17:12.807 00:43:16 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:17:12.807 00:43:16 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:17:12.807 00:43:16 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:17:12.808 00:43:16 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:17:12.808 00:43:16 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:17:12.808 patching file config/rte_config.h 00:17:12.808 Hunk #1 succeeded at 60 (offset 1 line). 00:17:12.808 00:43:16 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:17:12.808 00:43:16 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:17:12.808 00:43:16 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:17:12.808 00:43:16 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:17:12.808 00:43:16 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:17:18.080 The Meson build system 00:17:18.080 Version: 1.3.1 00:17:18.080 Source dir: /home/vagrant/spdk_repo/dpdk 00:17:18.080 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:17:18.080 Build type: native build 00:17:18.080 Program cat found: YES (/usr/bin/cat) 00:17:18.080 Project name: DPDK 00:17:18.080 Project version: 23.11.0 00:17:18.080 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:17:18.080 C linker for the host machine: gcc ld.bfd 2.39-16 00:17:18.080 Host machine cpu family: x86_64 00:17:18.080 Host machine cpu: x86_64 00:17:18.080 Message: ## Building in Developer Mode ## 00:17:18.080 Program pkg-config found: YES (/usr/bin/pkg-config) 00:17:18.080 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:17:18.080 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:17:18.080 Program python3 found: YES (/usr/bin/python3) 00:17:18.080 Program cat found: YES (/usr/bin/cat) 00:17:18.080 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
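
The cmp_versions trace a few lines above (lt 23.11.0 21.11.0 ... return 1) is hard to follow in xtrace form: it splits both version strings on the characters ".-:" and compares the numeric components pairwise. A minimal bash sketch of that comparison pattern is shown below; version_lt and its variable names are illustrative stand-ins, not the actual helpers in scripts/common.sh, and only numeric components are handled here.

    # Illustrative sketch of a component-wise "less than" version check,
    # mirroring the cmp_versions trace above (numeric components only).
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"    # e.g. 23.11.0 -> (23 11 0)
        IFS='.-:' read -ra b <<< "$2"    # e.g. 21.11.0 -> (21 11 0)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # already newer: not less-than
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly older: less-than
        done
        return 1                                        # equal: not less-than
    }
    version_lt 23.11.0 21.11.0 || echo "DPDK is 21.11 or newer"

In the trace above the check returns 1 (23.11.0 is not older than 21.11.0), so the job patches config/rte_config.h for the newer DPDK and then configures it with meson, as the following output shows.
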
00:17:18.080 Compiler for C supports arguments -march=native: YES 00:17:18.080 Checking for size of "void *" : 8 00:17:18.080 Checking for size of "void *" : 8 (cached) 00:17:18.080 Library m found: YES 00:17:18.080 Library numa found: YES 00:17:18.080 Has header "numaif.h" : YES 00:17:18.080 Library fdt found: NO 00:17:18.080 Library execinfo found: NO 00:17:18.080 Has header "execinfo.h" : YES 00:17:18.080 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:17:18.080 Run-time dependency libarchive found: NO (tried pkgconfig) 00:17:18.080 Run-time dependency libbsd found: NO (tried pkgconfig) 00:17:18.080 Run-time dependency jansson found: NO (tried pkgconfig) 00:17:18.080 Run-time dependency openssl found: YES 3.0.9 00:17:18.080 Run-time dependency libpcap found: YES 1.10.4 00:17:18.080 Has header "pcap.h" with dependency libpcap: YES 00:17:18.080 Compiler for C supports arguments -Wcast-qual: YES 00:17:18.080 Compiler for C supports arguments -Wdeprecated: YES 00:17:18.080 Compiler for C supports arguments -Wformat: YES 00:17:18.080 Compiler for C supports arguments -Wformat-nonliteral: NO 00:17:18.080 Compiler for C supports arguments -Wformat-security: NO 00:17:18.080 Compiler for C supports arguments -Wmissing-declarations: YES 00:17:18.080 Compiler for C supports arguments -Wmissing-prototypes: YES 00:17:18.080 Compiler for C supports arguments -Wnested-externs: YES 00:17:18.080 Compiler for C supports arguments -Wold-style-definition: YES 00:17:18.080 Compiler for C supports arguments -Wpointer-arith: YES 00:17:18.080 Compiler for C supports arguments -Wsign-compare: YES 00:17:18.080 Compiler for C supports arguments -Wstrict-prototypes: YES 00:17:18.080 Compiler for C supports arguments -Wundef: YES 00:17:18.080 Compiler for C supports arguments -Wwrite-strings: YES 00:17:18.080 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:17:18.080 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:17:18.080 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:17:18.080 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:17:18.080 Program objdump found: YES (/usr/bin/objdump) 00:17:18.080 Compiler for C supports arguments -mavx512f: YES 00:17:18.080 Checking if "AVX512 checking" compiles: YES 00:17:18.080 Fetching value of define "__SSE4_2__" : 1 00:17:18.080 Fetching value of define "__AES__" : 1 00:17:18.080 Fetching value of define "__AVX__" : 1 00:17:18.080 Fetching value of define "__AVX2__" : 1 00:17:18.080 Fetching value of define "__AVX512BW__" : (undefined) 00:17:18.080 Fetching value of define "__AVX512CD__" : (undefined) 00:17:18.080 Fetching value of define "__AVX512DQ__" : (undefined) 00:17:18.080 Fetching value of define "__AVX512F__" : (undefined) 00:17:18.080 Fetching value of define "__AVX512VL__" : (undefined) 00:17:18.080 Fetching value of define "__PCLMUL__" : 1 00:17:18.080 Fetching value of define "__RDRND__" : 1 00:17:18.080 Fetching value of define "__RDSEED__" : 1 00:17:18.080 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:17:18.080 Fetching value of define "__znver1__" : (undefined) 00:17:18.080 Fetching value of define "__znver2__" : (undefined) 00:17:18.080 Fetching value of define "__znver3__" : (undefined) 00:17:18.080 Fetching value of define "__znver4__" : (undefined) 00:17:18.080 Compiler for C supports arguments -Wno-format-truncation: YES 00:17:18.080 Message: lib/log: Defining dependency "log" 00:17:18.080 Message: lib/kvargs: Defining dependency "kvargs" 00:17:18.080 
Message: lib/telemetry: Defining dependency "telemetry" 00:17:18.080 Checking for function "getentropy" : NO 00:17:18.080 Message: lib/eal: Defining dependency "eal" 00:17:18.080 Message: lib/ring: Defining dependency "ring" 00:17:18.081 Message: lib/rcu: Defining dependency "rcu" 00:17:18.081 Message: lib/mempool: Defining dependency "mempool" 00:17:18.081 Message: lib/mbuf: Defining dependency "mbuf" 00:17:18.081 Fetching value of define "__PCLMUL__" : 1 (cached) 00:17:18.081 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:17:18.081 Compiler for C supports arguments -mpclmul: YES 00:17:18.081 Compiler for C supports arguments -maes: YES 00:17:18.081 Compiler for C supports arguments -mavx512f: YES (cached) 00:17:18.081 Compiler for C supports arguments -mavx512bw: YES 00:17:18.081 Compiler for C supports arguments -mavx512dq: YES 00:17:18.081 Compiler for C supports arguments -mavx512vl: YES 00:17:18.081 Compiler for C supports arguments -mvpclmulqdq: YES 00:17:18.081 Compiler for C supports arguments -mavx2: YES 00:17:18.081 Compiler for C supports arguments -mavx: YES 00:17:18.081 Message: lib/net: Defining dependency "net" 00:17:18.081 Message: lib/meter: Defining dependency "meter" 00:17:18.081 Message: lib/ethdev: Defining dependency "ethdev" 00:17:18.081 Message: lib/pci: Defining dependency "pci" 00:17:18.081 Message: lib/cmdline: Defining dependency "cmdline" 00:17:18.081 Message: lib/metrics: Defining dependency "metrics" 00:17:18.081 Message: lib/hash: Defining dependency "hash" 00:17:18.081 Message: lib/timer: Defining dependency "timer" 00:17:18.081 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:17:18.081 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:17:18.081 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:17:18.081 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:17:18.081 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:17:18.081 Message: lib/acl: Defining dependency "acl" 00:17:18.081 Message: lib/bbdev: Defining dependency "bbdev" 00:17:18.081 Message: lib/bitratestats: Defining dependency "bitratestats" 00:17:18.081 Run-time dependency libelf found: YES 0.190 00:17:18.081 Message: lib/bpf: Defining dependency "bpf" 00:17:18.081 Message: lib/cfgfile: Defining dependency "cfgfile" 00:17:18.081 Message: lib/compressdev: Defining dependency "compressdev" 00:17:18.081 Message: lib/cryptodev: Defining dependency "cryptodev" 00:17:18.081 Message: lib/distributor: Defining dependency "distributor" 00:17:18.081 Message: lib/dmadev: Defining dependency "dmadev" 00:17:18.081 Message: lib/efd: Defining dependency "efd" 00:17:18.081 Message: lib/eventdev: Defining dependency "eventdev" 00:17:18.081 Message: lib/dispatcher: Defining dependency "dispatcher" 00:17:18.081 Message: lib/gpudev: Defining dependency "gpudev" 00:17:18.081 Message: lib/gro: Defining dependency "gro" 00:17:18.081 Message: lib/gso: Defining dependency "gso" 00:17:18.081 Message: lib/ip_frag: Defining dependency "ip_frag" 00:17:18.081 Message: lib/jobstats: Defining dependency "jobstats" 00:17:18.081 Message: lib/latencystats: Defining dependency "latencystats" 00:17:18.081 Message: lib/lpm: Defining dependency "lpm" 00:17:18.081 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:17:18.081 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:17:18.081 Fetching value of define "__AVX512IFMA__" : (undefined) 00:17:18.081 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:17:18.081 Message: lib/member: Defining dependency "member" 00:17:18.081 Message: lib/pcapng: Defining dependency "pcapng" 00:17:18.081 Compiler for C supports arguments -Wno-cast-qual: YES 00:17:18.081 Message: lib/power: Defining dependency "power" 00:17:18.081 Message: lib/rawdev: Defining dependency "rawdev" 00:17:18.081 Message: lib/regexdev: Defining dependency "regexdev" 00:17:18.081 Message: lib/mldev: Defining dependency "mldev" 00:17:18.081 Message: lib/rib: Defining dependency "rib" 00:17:18.081 Message: lib/reorder: Defining dependency "reorder" 00:17:18.081 Message: lib/sched: Defining dependency "sched" 00:17:18.081 Message: lib/security: Defining dependency "security" 00:17:18.081 Message: lib/stack: Defining dependency "stack" 00:17:18.081 Has header "linux/userfaultfd.h" : YES 00:17:18.081 Has header "linux/vduse.h" : YES 00:17:18.081 Message: lib/vhost: Defining dependency "vhost" 00:17:18.081 Message: lib/ipsec: Defining dependency "ipsec" 00:17:18.081 Message: lib/pdcp: Defining dependency "pdcp" 00:17:18.081 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:17:18.081 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:17:18.081 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:17:18.081 Compiler for C supports arguments -mavx512bw: YES (cached) 00:17:18.081 Message: lib/fib: Defining dependency "fib" 00:17:18.081 Message: lib/port: Defining dependency "port" 00:17:18.081 Message: lib/pdump: Defining dependency "pdump" 00:17:18.081 Message: lib/table: Defining dependency "table" 00:17:18.081 Message: lib/pipeline: Defining dependency "pipeline" 00:17:18.081 Message: lib/graph: Defining dependency "graph" 00:17:18.081 Message: lib/node: Defining dependency "node" 00:17:18.081 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:17:19.988 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:17:19.988 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:17:19.988 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:17:19.988 Compiler for C supports arguments -Wno-sign-compare: YES 00:17:19.988 Compiler for C supports arguments -Wno-unused-value: YES 00:17:19.988 Compiler for C supports arguments -Wno-format: YES 00:17:19.988 Compiler for C supports arguments -Wno-format-security: YES 00:17:19.988 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:17:19.988 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:17:19.988 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:17:19.988 Compiler for C supports arguments -Wno-unused-parameter: YES 00:17:19.988 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:17:19.988 Compiler for C supports arguments -mavx512f: YES (cached) 00:17:19.988 Compiler for C supports arguments -mavx512bw: YES (cached) 00:17:19.988 Compiler for C supports arguments -march=skylake-avx512: YES 00:17:19.988 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:17:19.988 Has header "sys/epoll.h" : YES 00:17:19.988 Program doxygen found: YES (/usr/bin/doxygen) 00:17:19.988 Configuring doxy-api-html.conf using configuration 00:17:19.988 Configuring doxy-api-man.conf using configuration 00:17:19.988 Program mandb found: YES (/usr/bin/mandb) 00:17:19.988 Program sphinx-build found: NO 00:17:19.988 Configuring rte_build_config.h using configuration 00:17:19.988 Message: 00:17:19.988 ================= 00:17:19.988 Applications Enabled 00:17:19.988 ================= 00:17:19.988 
00:17:19.988 apps: 00:17:19.988 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:17:19.988 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:17:19.988 test-pmd, test-regex, test-sad, test-security-perf, 00:17:19.988 00:17:19.988 Message: 00:17:19.988 ================= 00:17:19.988 Libraries Enabled 00:17:19.988 ================= 00:17:19.988 00:17:19.988 libs: 00:17:19.988 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:17:19.988 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:17:19.988 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:17:19.988 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:17:19.988 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:17:19.988 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:17:19.988 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:17:19.988 00:17:19.988 00:17:19.988 Message: 00:17:19.988 =============== 00:17:19.988 Drivers Enabled 00:17:19.988 =============== 00:17:19.988 00:17:19.988 common: 00:17:19.988 00:17:19.988 bus: 00:17:19.988 pci, vdev, 00:17:19.988 mempool: 00:17:19.988 ring, 00:17:19.988 dma: 00:17:19.988 00:17:19.988 net: 00:17:19.988 i40e, 00:17:19.988 raw: 00:17:19.988 00:17:19.988 crypto: 00:17:19.988 00:17:19.988 compress: 00:17:19.988 00:17:19.988 regex: 00:17:19.988 00:17:19.988 ml: 00:17:19.988 00:17:19.988 vdpa: 00:17:19.988 00:17:19.988 event: 00:17:19.988 00:17:19.988 baseband: 00:17:19.988 00:17:19.988 gpu: 00:17:19.988 00:17:19.988 00:17:19.988 Message: 00:17:19.988 ================= 00:17:19.988 Content Skipped 00:17:19.988 ================= 00:17:19.988 00:17:19.988 apps: 00:17:19.988 00:17:19.988 libs: 00:17:19.988 00:17:19.988 drivers: 00:17:19.988 common/cpt: not in enabled drivers build config 00:17:19.988 common/dpaax: not in enabled drivers build config 00:17:19.988 common/iavf: not in enabled drivers build config 00:17:19.988 common/idpf: not in enabled drivers build config 00:17:19.988 common/mvep: not in enabled drivers build config 00:17:19.988 common/octeontx: not in enabled drivers build config 00:17:19.988 bus/auxiliary: not in enabled drivers build config 00:17:19.988 bus/cdx: not in enabled drivers build config 00:17:19.988 bus/dpaa: not in enabled drivers build config 00:17:19.988 bus/fslmc: not in enabled drivers build config 00:17:19.988 bus/ifpga: not in enabled drivers build config 00:17:19.988 bus/platform: not in enabled drivers build config 00:17:19.988 bus/vmbus: not in enabled drivers build config 00:17:19.988 common/cnxk: not in enabled drivers build config 00:17:19.988 common/mlx5: not in enabled drivers build config 00:17:19.988 common/nfp: not in enabled drivers build config 00:17:19.988 common/qat: not in enabled drivers build config 00:17:19.988 common/sfc_efx: not in enabled drivers build config 00:17:19.988 mempool/bucket: not in enabled drivers build config 00:17:19.988 mempool/cnxk: not in enabled drivers build config 00:17:19.988 mempool/dpaa: not in enabled drivers build config 00:17:19.988 mempool/dpaa2: not in enabled drivers build config 00:17:19.988 mempool/octeontx: not in enabled drivers build config 00:17:19.988 mempool/stack: not in enabled drivers build config 00:17:19.988 dma/cnxk: not in enabled drivers build config 00:17:19.988 dma/dpaa: not in enabled drivers build config 00:17:19.988 dma/dpaa2: not in enabled drivers build config 00:17:19.988 dma/hisilicon: 
not in enabled drivers build config 00:17:19.988 dma/idxd: not in enabled drivers build config 00:17:19.988 dma/ioat: not in enabled drivers build config 00:17:19.988 dma/skeleton: not in enabled drivers build config 00:17:19.988 net/af_packet: not in enabled drivers build config 00:17:19.988 net/af_xdp: not in enabled drivers build config 00:17:19.988 net/ark: not in enabled drivers build config 00:17:19.988 net/atlantic: not in enabled drivers build config 00:17:19.988 net/avp: not in enabled drivers build config 00:17:19.988 net/axgbe: not in enabled drivers build config 00:17:19.988 net/bnx2x: not in enabled drivers build config 00:17:19.988 net/bnxt: not in enabled drivers build config 00:17:19.988 net/bonding: not in enabled drivers build config 00:17:19.988 net/cnxk: not in enabled drivers build config 00:17:19.988 net/cpfl: not in enabled drivers build config 00:17:19.988 net/cxgbe: not in enabled drivers build config 00:17:19.988 net/dpaa: not in enabled drivers build config 00:17:19.988 net/dpaa2: not in enabled drivers build config 00:17:19.988 net/e1000: not in enabled drivers build config 00:17:19.988 net/ena: not in enabled drivers build config 00:17:19.988 net/enetc: not in enabled drivers build config 00:17:19.988 net/enetfec: not in enabled drivers build config 00:17:19.988 net/enic: not in enabled drivers build config 00:17:19.988 net/failsafe: not in enabled drivers build config 00:17:19.988 net/fm10k: not in enabled drivers build config 00:17:19.988 net/gve: not in enabled drivers build config 00:17:19.988 net/hinic: not in enabled drivers build config 00:17:19.988 net/hns3: not in enabled drivers build config 00:17:19.988 net/iavf: not in enabled drivers build config 00:17:19.988 net/ice: not in enabled drivers build config 00:17:19.988 net/idpf: not in enabled drivers build config 00:17:19.988 net/igc: not in enabled drivers build config 00:17:19.988 net/ionic: not in enabled drivers build config 00:17:19.988 net/ipn3ke: not in enabled drivers build config 00:17:19.988 net/ixgbe: not in enabled drivers build config 00:17:19.988 net/mana: not in enabled drivers build config 00:17:19.988 net/memif: not in enabled drivers build config 00:17:19.988 net/mlx4: not in enabled drivers build config 00:17:19.988 net/mlx5: not in enabled drivers build config 00:17:19.988 net/mvneta: not in enabled drivers build config 00:17:19.988 net/mvpp2: not in enabled drivers build config 00:17:19.988 net/netvsc: not in enabled drivers build config 00:17:19.988 net/nfb: not in enabled drivers build config 00:17:19.988 net/nfp: not in enabled drivers build config 00:17:19.988 net/ngbe: not in enabled drivers build config 00:17:19.988 net/null: not in enabled drivers build config 00:17:19.988 net/octeontx: not in enabled drivers build config 00:17:19.988 net/octeon_ep: not in enabled drivers build config 00:17:19.988 net/pcap: not in enabled drivers build config 00:17:19.988 net/pfe: not in enabled drivers build config 00:17:19.989 net/qede: not in enabled drivers build config 00:17:19.989 net/ring: not in enabled drivers build config 00:17:19.989 net/sfc: not in enabled drivers build config 00:17:19.989 net/softnic: not in enabled drivers build config 00:17:19.989 net/tap: not in enabled drivers build config 00:17:19.989 net/thunderx: not in enabled drivers build config 00:17:19.989 net/txgbe: not in enabled drivers build config 00:17:19.989 net/vdev_netvsc: not in enabled drivers build config 00:17:19.989 net/vhost: not in enabled drivers build config 00:17:19.989 net/virtio: not in enabled 
drivers build config 00:17:19.989 net/vmxnet3: not in enabled drivers build config 00:17:19.989 raw/cnxk_bphy: not in enabled drivers build config 00:17:19.989 raw/cnxk_gpio: not in enabled drivers build config 00:17:19.989 raw/dpaa2_cmdif: not in enabled drivers build config 00:17:19.989 raw/ifpga: not in enabled drivers build config 00:17:19.989 raw/ntb: not in enabled drivers build config 00:17:19.989 raw/skeleton: not in enabled drivers build config 00:17:19.989 crypto/armv8: not in enabled drivers build config 00:17:19.989 crypto/bcmfs: not in enabled drivers build config 00:17:19.989 crypto/caam_jr: not in enabled drivers build config 00:17:19.989 crypto/ccp: not in enabled drivers build config 00:17:19.989 crypto/cnxk: not in enabled drivers build config 00:17:19.989 crypto/dpaa_sec: not in enabled drivers build config 00:17:19.989 crypto/dpaa2_sec: not in enabled drivers build config 00:17:19.989 crypto/ipsec_mb: not in enabled drivers build config 00:17:19.989 crypto/mlx5: not in enabled drivers build config 00:17:19.989 crypto/mvsam: not in enabled drivers build config 00:17:19.989 crypto/nitrox: not in enabled drivers build config 00:17:19.989 crypto/null: not in enabled drivers build config 00:17:19.989 crypto/octeontx: not in enabled drivers build config 00:17:19.989 crypto/openssl: not in enabled drivers build config 00:17:19.989 crypto/scheduler: not in enabled drivers build config 00:17:19.989 crypto/uadk: not in enabled drivers build config 00:17:19.989 crypto/virtio: not in enabled drivers build config 00:17:19.989 compress/isal: not in enabled drivers build config 00:17:19.989 compress/mlx5: not in enabled drivers build config 00:17:19.989 compress/octeontx: not in enabled drivers build config 00:17:19.989 compress/zlib: not in enabled drivers build config 00:17:19.989 regex/mlx5: not in enabled drivers build config 00:17:19.989 regex/cn9k: not in enabled drivers build config 00:17:19.989 ml/cnxk: not in enabled drivers build config 00:17:19.989 vdpa/ifc: not in enabled drivers build config 00:17:19.989 vdpa/mlx5: not in enabled drivers build config 00:17:19.989 vdpa/nfp: not in enabled drivers build config 00:17:19.989 vdpa/sfc: not in enabled drivers build config 00:17:19.989 event/cnxk: not in enabled drivers build config 00:17:19.989 event/dlb2: not in enabled drivers build config 00:17:19.989 event/dpaa: not in enabled drivers build config 00:17:19.989 event/dpaa2: not in enabled drivers build config 00:17:19.989 event/dsw: not in enabled drivers build config 00:17:19.989 event/opdl: not in enabled drivers build config 00:17:19.989 event/skeleton: not in enabled drivers build config 00:17:19.989 event/sw: not in enabled drivers build config 00:17:19.989 event/octeontx: not in enabled drivers build config 00:17:19.989 baseband/acc: not in enabled drivers build config 00:17:19.989 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:17:19.989 baseband/fpga_lte_fec: not in enabled drivers build config 00:17:19.989 baseband/la12xx: not in enabled drivers build config 00:17:19.989 baseband/null: not in enabled drivers build config 00:17:19.989 baseband/turbo_sw: not in enabled drivers build config 00:17:19.989 gpu/cuda: not in enabled drivers build config 00:17:19.989 00:17:19.989 00:17:19.989 Build targets in project: 220 00:17:19.989 00:17:19.989 DPDK 23.11.0 00:17:19.989 00:17:19.989 User defined options 00:17:19.989 libdir : lib 00:17:19.989 prefix : /home/vagrant/spdk_repo/dpdk/build 00:17:19.989 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:17:19.989 c_link_args : 00:17:19.989 enable_docs : false 00:17:19.989 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:17:19.989 enable_kmods : false 00:17:19.989 machine : native 00:17:19.989 tests : false 00:17:19.989 00:17:19.989 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:17:19.989 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:17:19.989 00:43:23 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:17:19.989 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:17:20.248 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:17:20.248 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:17:20.248 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:17:20.248 [4/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:17:20.248 [5/710] Linking static target lib/librte_kvargs.a 00:17:20.248 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:17:20.248 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:17:20.505 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:17:20.505 [9/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:17:20.505 [10/710] Linking static target lib/librte_log.a 00:17:20.505 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:17:20.763 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:17:20.763 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:17:21.021 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:17:21.021 [15/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:17:21.021 [16/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:17:21.021 [17/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:17:21.021 [18/710] Linking target lib/librte_log.so.24.0 00:17:21.021 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:17:21.281 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:17:21.540 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:17:21.540 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:17:21.540 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:17:21.540 [24/710] Linking target lib/librte_kvargs.so.24.0 00:17:21.540 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:17:21.540 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:17:21.540 [27/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:17:21.798 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:17:21.798 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:17:21.798 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:17:21.798 [31/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:17:21.798 [32/710] Linking static target lib/librte_telemetry.a 00:17:22.055 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:17:22.055 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:17:22.055 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:17:22.314 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:17:22.314 [37/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:17:22.314 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:17:22.314 [39/710] Linking target lib/librte_telemetry.so.24.0 00:17:22.314 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:17:22.314 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:17:22.572 [42/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:17:22.572 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:17:22.572 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:17:22.572 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:17:22.572 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:17:22.830 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:17:22.830 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:17:22.830 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:17:23.089 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:17:23.089 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:17:23.089 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:17:23.089 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:17:23.346 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:17:23.346 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:17:23.346 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:17:23.346 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:17:23.605 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:17:23.605 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:17:23.605 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:17:23.605 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:17:23.605 [62/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:17:23.863 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:17:23.863 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:17:23.863 [65/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:17:23.863 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:17:24.121 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:17:24.121 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:17:24.121 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:17:24.378 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:17:24.378 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:17:24.378 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:17:24.379 
[73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:17:24.379 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:17:24.379 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:17:24.379 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:17:24.379 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:17:24.638 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:17:24.896 [79/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:17:24.896 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:17:24.896 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:17:24.896 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:17:25.154 [83/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:17:25.154 [84/710] Linking static target lib/librte_ring.a 00:17:25.412 [85/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:17:25.412 [86/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:17:25.671 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:17:25.671 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:17:25.671 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:17:25.671 [90/710] Linking static target lib/librte_mempool.a 00:17:25.929 [91/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:17:25.929 [92/710] Linking static target lib/librte_rcu.a 00:17:25.929 [93/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:17:25.929 [94/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:17:25.929 [95/710] Linking static target lib/librte_eal.a 00:17:25.929 [96/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:17:26.187 [97/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:17:26.187 [98/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:17:26.187 [99/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:17:26.187 [100/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:17:26.187 [101/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:17:26.446 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:17:26.446 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:17:26.704 [104/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:17:26.704 [105/710] Linking static target lib/librte_mbuf.a 00:17:26.704 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:17:26.963 [107/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:17:26.963 [108/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:17:26.963 [109/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:17:26.963 [110/710] Linking static target lib/librte_meter.a 00:17:27.222 [111/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:17:27.222 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:17:27.223 [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:17:27.223 [114/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 
00:17:27.223 [115/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:17:27.483 [116/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:17:27.483 [117/710] Linking static target lib/librte_net.a 00:17:27.483 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:17:27.741 [119/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:17:27.999 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:17:28.568 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:17:28.568 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:17:28.568 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:17:28.568 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:17:28.568 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:17:28.568 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:17:28.568 [127/710] Linking static target lib/librte_pci.a 00:17:28.827 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:17:28.827 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:17:29.086 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:17:29.086 [131/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:17:29.086 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:17:29.086 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:17:29.086 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:17:29.086 [135/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:17:29.086 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:17:29.344 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:17:29.344 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:17:29.344 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:17:29.345 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:17:29.345 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:17:29.603 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:17:29.603 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:17:29.862 [144/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:17:30.120 [145/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:17:30.120 [146/710] Linking static target lib/librte_cmdline.a 00:17:30.120 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:17:30.120 [148/710] Linking static target lib/librte_metrics.a 00:17:30.379 [149/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:17:30.637 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:17:30.637 [151/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:17:30.637 [152/710] Linking static target lib/librte_timer.a 00:17:30.637 [153/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:17:31.206 [154/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture 
output) 00:17:31.206 [155/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:17:31.206 [156/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:17:31.464 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:17:32.401 [158/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:17:32.401 [159/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:17:32.401 [160/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:17:32.659 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:17:32.659 [162/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:17:32.917 [163/710] Linking static target lib/librte_hash.a 00:17:32.917 [164/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:17:32.917 [165/710] Linking static target lib/librte_bitratestats.a 00:17:33.176 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:17:33.176 [167/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:17:33.435 [168/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:17:33.435 [169/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:17:33.435 [170/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:17:33.435 [171/710] Linking static target lib/acl/libavx2_tmp.a 00:17:33.435 [172/710] Linking static target lib/librte_ethdev.a 00:17:33.693 [173/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:17:33.693 [174/710] Linking static target lib/librte_bbdev.a 00:17:33.693 [175/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:17:33.693 [176/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:17:33.693 [177/710] Linking target lib/librte_eal.so.24.0 00:17:33.951 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:17:33.951 [179/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:17:33.951 [180/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:17:33.951 [181/710] Linking target lib/librte_ring.so.24.0 00:17:33.951 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:17:33.951 [183/710] Linking target lib/librte_meter.so.24.0 00:17:34.210 [184/710] Linking target lib/librte_pci.so.24.0 00:17:34.210 [185/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:17:34.210 [186/710] Linking target lib/librte_timer.so.24.0 00:17:34.210 [187/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:17:34.210 [188/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:17:34.210 [189/710] Linking target lib/librte_rcu.so.24.0 00:17:34.526 [190/710] Linking target lib/librte_mempool.so.24.0 00:17:34.526 [191/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:34.526 [192/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:17:34.526 [193/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:17:34.526 [194/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:17:34.526 [195/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:17:34.526 [196/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:17:34.526 [197/710] Linking static target 
lib/acl/libavx512_tmp.a 00:17:34.526 [198/710] Linking static target lib/librte_acl.a 00:17:34.526 [199/710] Linking target lib/librte_mbuf.so.24.0 00:17:34.526 [200/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:17:34.818 [201/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:17:34.818 [202/710] Linking target lib/librte_net.so.24.0 00:17:34.818 [203/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:17:34.818 [204/710] Linking target lib/librte_acl.so.24.0 00:17:35.076 [205/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:17:35.076 [206/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:17:35.076 [207/710] Linking target lib/librte_bbdev.so.24.0 00:17:35.076 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:17:35.076 [209/710] Linking target lib/librte_cmdline.so.24.0 00:17:35.076 [210/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:17:35.076 [211/710] Linking static target lib/librte_cfgfile.a 00:17:35.076 [212/710] Linking target lib/librte_hash.so.24.0 00:17:35.335 [213/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:17:35.335 [214/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:17:35.335 [215/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:17:35.335 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:17:35.335 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:17:35.335 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:17:35.335 [219/710] Linking static target lib/librte_bpf.a 00:17:35.593 [220/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:17:35.593 [221/710] Linking target lib/librte_cfgfile.so.24.0 00:17:35.852 [222/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:17:36.109 [223/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:17:36.109 [224/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:17:36.109 [225/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:17:36.109 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:17:36.367 [227/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:17:36.367 [228/710] Linking static target lib/librte_compressdev.a 00:17:36.625 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:17:36.625 [230/710] Linking static target lib/librte_distributor.a 00:17:36.884 [231/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:17:36.884 [232/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:17:36.884 [233/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:17:36.884 [234/710] Linking target lib/librte_distributor.so.24.0 00:17:37.143 [235/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:17:37.143 [236/710] Linking static target lib/librte_dmadev.a 00:17:37.143 [237/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:37.143 [238/710] Linking target lib/librte_compressdev.so.24.0 
00:17:37.402 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:17:37.661 [240/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:37.661 [241/710] Linking target lib/librte_dmadev.so.24.0 00:17:37.661 [242/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:17:37.661 [243/710] Linking static target lib/librte_efd.a 00:17:37.661 [244/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:17:37.928 [245/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:17:38.186 [246/710] Linking target lib/librte_efd.so.24.0 00:17:38.186 [247/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:17:38.186 [248/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:17:38.186 [249/710] Linking static target lib/librte_cryptodev.a 00:17:38.445 [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:17:38.704 [251/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:17:38.962 [252/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:17:38.962 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:17:38.962 [254/710] Linking static target lib/librte_dispatcher.a 00:17:38.962 [255/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:39.221 [256/710] Linking target lib/librte_ethdev.so.24.0 00:17:39.221 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:17:39.221 [258/710] Linking target lib/librte_metrics.so.24.0 00:17:39.480 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:17:39.480 [260/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:17:39.480 [261/710] Linking static target lib/librte_gpudev.a 00:17:39.480 [262/710] Linking target lib/librte_bpf.so.24.0 00:17:39.480 [263/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:17:39.480 [264/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:17:39.480 [265/710] Linking target lib/librte_bitratestats.so.24.0 00:17:39.480 [266/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:17:39.739 [267/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:39.739 [268/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:17:39.739 [269/710] Linking target lib/librte_cryptodev.so.24.0 00:17:39.739 [270/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:17:39.739 [271/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:17:39.739 [272/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:17:39.998 [273/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:17:40.257 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:40.257 [275/710] Linking target lib/librte_gpudev.so.24.0 00:17:40.514 [276/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:17:40.514 [277/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:17:40.514 [278/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:17:40.514 [279/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:17:40.514 [280/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:17:40.514 [281/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:17:40.772 [282/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:17:41.030 [283/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:17:41.030 [284/710] Linking static target lib/librte_gro.a 00:17:41.030 [285/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:17:41.030 [286/710] Linking static target lib/librte_gso.a 00:17:41.289 [287/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:17:41.289 [288/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:17:41.289 [289/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:17:41.289 [290/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:17:41.289 [291/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:17:41.289 [292/710] Linking static target lib/librte_jobstats.a 00:17:41.289 [293/710] Linking target lib/librte_gso.so.24.0 00:17:41.289 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:17:41.289 [295/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:17:41.557 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:17:41.557 [297/710] Linking target lib/librte_gro.so.24.0 00:17:41.557 [298/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:17:41.557 [299/710] Linking static target lib/librte_ip_frag.a 00:17:41.557 [300/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:17:41.557 [301/710] Linking static target lib/librte_eventdev.a 00:17:41.839 [302/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:17:41.839 [303/710] Linking target lib/librte_jobstats.so.24.0 00:17:41.839 [304/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:17:42.097 [305/710] Linking target lib/librte_ip_frag.so.24.0 00:17:42.097 [306/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:17:42.097 [307/710] Linking static target lib/librte_latencystats.a 00:17:42.097 [308/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:17:42.097 [309/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:17:42.097 [310/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:17:42.097 [311/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:17:42.097 [312/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:17:42.355 [313/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:17:42.355 [314/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:17:42.355 [315/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:17:42.355 [316/710] Linking target lib/librte_latencystats.so.24.0 00:17:42.356 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:17:42.923 [318/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:17:42.923 [319/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:17:43.181 [320/710] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:17:43.181 [321/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:17:43.181 [322/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:17:43.181 [323/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:17:43.181 [324/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:17:43.181 [325/710] Linking static target lib/librte_pcapng.a 00:17:43.440 [326/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:17:43.440 [327/710] Linking static target lib/librte_lpm.a 00:17:43.440 [328/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:17:43.440 [329/710] Linking target lib/librte_pcapng.so.24.0 00:17:43.699 [330/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:17:43.699 [331/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:17:43.699 [332/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:17:43.699 [333/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:17:43.699 [334/710] Linking target lib/librte_lpm.so.24.0 00:17:43.957 [335/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:17:43.957 [336/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:17:43.957 [337/710] Linking static target lib/librte_power.a 00:17:43.957 [338/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:17:43.957 [339/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:17:43.957 [340/710] Linking static target lib/librte_regexdev.a 00:17:43.957 [341/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:17:43.957 [342/710] Linking static target lib/librte_member.a 00:17:43.957 [343/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:44.216 [344/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:17:44.216 [345/710] Linking static target lib/librte_rawdev.a 00:17:44.216 [346/710] Linking target lib/librte_eventdev.so.24.0 00:17:44.216 [347/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:17:44.216 [348/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:17:44.216 [349/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:17:44.216 [350/710] Linking target lib/librte_dispatcher.so.24.0 00:17:44.216 [351/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:17:44.475 [352/710] Linking target lib/librte_member.so.24.0 00:17:44.475 [353/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:17:44.475 [354/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:44.475 [355/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:17:44.734 [356/710] Linking target lib/librte_rawdev.so.24.0 00:17:44.734 [357/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:17:44.734 [358/710] Linking target lib/librte_power.so.24.0 00:17:44.734 [359/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:17:44.734 [360/710] Linking static target lib/librte_mldev.a 00:17:44.734 [361/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:17:44.734 [362/710] Linking target lib/librte_regexdev.so.24.0 00:17:44.734 [363/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:17:45.068 [364/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:17:45.068 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:17:45.329 [366/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:17:45.329 [367/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:17:45.329 [368/710] Linking static target lib/librte_reorder.a 00:17:45.329 [369/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:17:45.329 [370/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:17:45.329 [371/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:17:45.588 [372/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:17:45.588 [373/710] Linking static target lib/librte_security.a 00:17:45.588 [374/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:17:45.588 [375/710] Linking static target lib/librte_rib.a 00:17:45.588 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:17:45.588 [377/710] Linking target lib/librte_reorder.so.24.0 00:17:45.588 [378/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:17:45.588 [379/710] Linking static target lib/librte_stack.a 00:17:45.847 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:17:45.847 [381/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:17:45.847 [382/710] Linking target lib/librte_stack.so.24.0 00:17:46.106 [383/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:46.106 [384/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:17:46.106 [385/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:17:46.106 [386/710] Linking target lib/librte_mldev.so.24.0 00:17:46.106 [387/710] Linking target lib/librte_security.so.24.0 00:17:46.106 [388/710] Linking target lib/librte_rib.so.24.0 00:17:46.106 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:17:46.106 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:17:46.106 [391/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:17:46.106 [392/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:17:46.365 [393/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:17:46.365 [394/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:17:46.365 [395/710] Linking static target lib/librte_sched.a 00:17:46.932 [396/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:17:46.932 [397/710] Linking target lib/librte_sched.so.24.0 00:17:46.932 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:17:46.932 [399/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:17:46.932 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:17:46.932 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:17:47.191 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:17:47.450 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:17:47.710 [404/710] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:17:47.710 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:17:47.710 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:17:47.969 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:17:48.227 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:17:48.227 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:17:48.227 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:17:48.566 [411/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:17:48.566 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:17:48.566 [413/710] Linking static target lib/librte_ipsec.a 00:17:48.566 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:17:48.566 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:17:48.566 [416/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:17:48.825 [417/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:17:48.825 [418/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:17:48.825 [419/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:17:48.825 [420/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:17:48.825 [421/710] Linking target lib/librte_ipsec.so.24.0 00:17:49.084 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:17:49.084 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:17:49.651 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:17:49.651 [425/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:17:49.910 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:17:49.910 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:17:49.910 [428/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:17:49.910 [429/710] Linking static target lib/librte_fib.a 00:17:49.910 [430/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:17:49.910 [431/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:17:49.910 [432/710] Linking static target lib/librte_pdcp.a 00:17:50.168 [433/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:17:50.168 [434/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:17:50.168 [435/710] Linking target lib/librte_fib.so.24.0 00:17:50.426 [436/710] Linking target lib/librte_pdcp.so.24.0 00:17:50.426 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:17:50.993 [438/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:17:50.993 [439/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:17:50.993 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:17:50.993 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:17:50.993 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:17:51.251 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:17:51.251 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:17:51.509 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:17:51.767 [446/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 
00:17:51.767 [447/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:17:51.768 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:17:51.768 [449/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:17:51.768 [450/710] Linking static target lib/librte_port.a 00:17:52.026 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:17:52.026 [452/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:17:52.284 [453/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:17:52.284 [454/710] Linking static target lib/librte_pdump.a 00:17:52.284 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:17:52.543 [456/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:17:52.543 [457/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:17:52.543 [458/710] Linking target lib/librte_pdump.so.24.0 00:17:52.543 [459/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:17:52.543 [460/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:17:52.543 [461/710] Linking target lib/librte_port.so.24.0 00:17:52.802 [462/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:17:52.802 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:17:53.369 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:17:53.369 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:17:53.369 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:17:53.369 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:17:53.369 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:17:53.628 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:17:53.628 [470/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:17:53.628 [471/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:17:53.886 [472/710] Linking static target lib/librte_table.a 00:17:53.886 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:17:54.453 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:17:54.453 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:17:54.453 [476/710] Linking target lib/librte_table.so.24.0 00:17:54.453 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:17:54.453 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:17:54.711 [479/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:17:54.969 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:17:54.969 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:17:55.227 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:17:55.227 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:17:55.486 [484/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:17:55.486 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:17:55.486 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:17:56.053 [487/710] Compiling C object 
lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:17:56.053 [488/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:17:56.053 [489/710] Linking static target lib/librte_graph.a 00:17:56.053 [490/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:17:56.053 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:17:56.054 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:17:56.312 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:17:56.570 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:17:56.570 [495/710] Linking target lib/librte_graph.so.24.0 00:17:56.828 [496/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:17:56.828 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:17:56.828 [498/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:17:56.828 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:17:57.394 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:17:57.394 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:17:57.394 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:17:57.653 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:17:57.653 [504/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:17:57.653 [505/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:17:57.653 [506/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:17:57.911 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:17:57.911 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:17:58.170 [509/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:17:58.170 [510/710] Linking static target lib/librte_node.a 00:17:58.428 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:17:58.428 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:17:58.428 [513/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:17:58.428 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:17:58.428 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:17:58.686 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:17:58.686 [517/710] Linking target lib/librte_node.so.24.0 00:17:58.944 [518/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:17:58.944 [519/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:17:58.944 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:17:58.944 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:17:58.944 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:17:58.944 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:17:58.944 [524/710] Linking static target drivers/librte_bus_vdev.a 00:17:59.203 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:17:59.203 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:17:59.203 [527/710] Linking static target drivers/librte_bus_pci.a 00:17:59.203 [528/710] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:17:59.203 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:17:59.203 [530/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:17:59.461 [531/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:17:59.461 [532/710] Linking target drivers/librte_bus_vdev.so.24.0 00:17:59.462 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:17:59.462 [534/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:17:59.462 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:17:59.720 [536/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:17:59.720 [537/710] Linking target drivers/librte_bus_pci.so.24.0 00:17:59.720 [538/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:17:59.720 [539/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:17:59.979 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:17:59.979 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:17:59.979 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:17:59.979 [543/710] Linking static target drivers/librte_mempool_ring.a 00:17:59.979 [544/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:17:59.979 [545/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:17:59.979 [546/710] Linking target drivers/librte_mempool_ring.so.24.0 00:18:00.545 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:18:00.803 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:18:00.803 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:18:00.803 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:18:00.803 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:18:01.737 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:18:01.737 [553/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:18:01.995 [554/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:18:01.995 [555/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:18:01.995 [556/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:18:01.995 [557/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:18:02.253 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:18:02.512 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:18:02.770 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:18:02.770 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:18:02.770 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:18:03.337 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:18:03.337 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:18:03.595 
[565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:18:03.595 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:18:03.854 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:18:04.113 [568/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:18:04.113 [569/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:18:04.113 [570/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:18:04.113 [571/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:18:04.113 [572/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:18:04.371 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:18:04.629 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:18:04.888 [575/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:18:04.888 [576/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:18:04.888 [577/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:18:04.888 [578/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:18:05.146 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:18:05.146 [580/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:18:05.404 [581/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:18:05.662 [582/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:18:05.662 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:18:05.662 [584/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:18:05.662 [585/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:18:05.662 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:18:05.662 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:18:05.662 [588/710] Linking static target drivers/librte_net_i40e.a 00:18:05.662 [589/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:18:05.662 [590/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:18:05.662 [591/710] Linking static target lib/librte_vhost.a 00:18:05.920 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:18:06.178 [593/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:18:06.437 [594/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:18:06.437 [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:18:06.437 [596/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:18:06.437 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:18:07.075 [598/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:18:07.075 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:18:07.075 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:18:07.075 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:18:07.075 [602/710] Linking target lib/librte_vhost.so.24.0 00:18:07.075 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:18:07.333 [604/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:18:07.333 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:18:07.591 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:18:07.591 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:18:08.158 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:18:08.158 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:18:08.158 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:18:08.158 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:18:08.158 [612/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:18:08.416 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:18:08.416 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:18:08.416 [615/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:18:08.416 [616/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:18:08.416 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:18:08.675 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:18:08.934 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:18:09.192 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:18:09.192 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:18:09.192 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:18:09.451 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:18:10.389 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:18:10.389 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:18:10.389 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:18:10.389 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:18:10.389 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:18:10.646 [629/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:18:10.646 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:18:10.646 [631/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:18:10.905 [632/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:18:10.905 [633/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:18:10.905 [634/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:18:11.163 [635/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:18:11.163 [636/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:18:11.423 [637/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:18:11.681 [638/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:18:11.681 [639/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 
00:18:11.681 [640/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:18:11.681 [641/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:18:11.681 [642/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:18:11.940 [643/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:18:11.940 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:18:12.199 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:18:12.199 [646/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:18:12.199 [647/710] Linking static target lib/librte_pipeline.a 00:18:12.199 [648/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:18:12.458 [649/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:18:12.458 [650/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:18:12.458 [651/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:18:12.458 [652/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:18:12.717 [653/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:18:12.976 [654/710] Linking target app/dpdk-dumpcap 00:18:12.976 [655/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:18:12.976 [656/710] Linking target app/dpdk-pdump 00:18:12.976 [657/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:18:12.976 [658/710] Linking target app/dpdk-graph 00:18:12.976 [659/710] Linking target app/dpdk-proc-info 00:18:13.234 [660/710] Linking target app/dpdk-test-acl 00:18:13.234 [661/710] Linking target app/dpdk-test-bbdev 00:18:13.494 [662/710] Linking target app/dpdk-test-cmdline 00:18:13.494 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:18:13.494 [664/710] Linking target app/dpdk-test-compress-perf 00:18:13.494 [665/710] Linking target app/dpdk-test-crypto-perf 00:18:13.494 [666/710] Linking target app/dpdk-test-dma-perf 00:18:13.494 [667/710] Linking target app/dpdk-test-eventdev 00:18:13.754 [668/710] Linking target app/dpdk-test-fib 00:18:13.754 [669/710] Linking target app/dpdk-test-flow-perf 00:18:13.754 [670/710] Linking target app/dpdk-test-gpudev 00:18:14.013 [671/710] Linking target app/dpdk-test-mldev 00:18:14.013 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:18:14.272 [673/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:18:14.272 [674/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:18:14.272 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:18:14.531 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:18:14.790 [677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:18:14.790 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:18:15.048 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:18:15.048 [680/710] Linking target app/dpdk-test-pipeline 00:18:15.306 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:18:15.306 [682/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:18:15.306 [683/710] Linking target lib/librte_pipeline.so.24.0 00:18:15.874 [684/710] 
Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:18:15.874 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:18:15.874 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:18:15.874 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:18:16.133 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:18:16.133 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:18:16.392 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:18:16.650 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:18:16.650 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:18:16.650 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:18:16.909 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:18:17.477 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:18:17.477 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:18:17.736 [697/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:18:17.736 [698/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:18:17.736 [699/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:18:17.736 [700/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:18:17.999 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:18:17.999 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:18:18.256 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:18:18.256 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:18:18.256 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:18:18.256 [706/710] Linking target app/dpdk-test-regex 00:18:18.256 [707/710] Linking target app/dpdk-test-sad 00:18:18.885 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:18:18.885 [709/710] Linking target app/dpdk-testpmd 00:18:19.453 [710/710] Linking target app/dpdk-test-security-perf 00:18:19.453 00:44:22 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:18:19.453 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:18:19.453 [0/1] Installing files. 
00:18:19.716 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:18:19.717 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:18:19.717 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:18:19.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:18:19.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:18:19.719 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:18:19.720 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:18:19.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:18:19.720 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.720 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:18:19.721 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:18:19.721 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.721 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:20.292 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:20.292 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:20.292 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:20.292 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:18:20.292 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:20.292 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:18:20.292 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:20.292 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:18:20.292 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:18:20.292 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:18:20.292 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.292 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.293 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.294 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.295 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.296 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:18:20.297 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:18:20.297 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:18:20.297 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:18:20.297 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:18:20.297 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:18:20.297 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:18:20.297 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:18:20.297 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:18:20.297 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:18:20.297 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:18:20.297 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:18:20.297 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:18:20.297 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:18:20.297 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:18:20.297 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:18:20.297 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:18:20.297 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:18:20.297 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:18:20.297 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:18:20.297 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:18:20.297 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:18:20.297 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:18:20.297 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:18:20.298 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:18:20.298 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:18:20.298 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:18:20.298 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:18:20.298 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:18:20.298 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:18:20.298 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:18:20.298 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:18:20.298 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:18:20.298 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:18:20.298 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:18:20.298 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:18:20.298 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:18:20.298 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:18:20.298 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:18:20.298 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:18:20.298 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:18:20.298 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:18:20.298 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:18:20.298 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:18:20.298 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:18:20.298 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:18:20.298 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:18:20.298 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:18:20.298 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:18:20.298 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:18:20.298 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:18:20.298 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:18:20.298 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:18:20.298 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:18:20.298 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:18:20.298 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:18:20.298 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:18:20.298 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:18:20.298 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:18:20.298 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:18:20.298 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:18:20.298 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:18:20.298 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:18:20.298 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:18:20.298 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:18:20.298 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:18:20.298 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:18:20.298 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:18:20.298 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:18:20.298 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:18:20.298 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:18:20.298 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:18:20.298 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:18:20.298 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:18:20.298 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:18:20.298 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:18:20.298 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:18:20.298 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:18:20.298 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:18:20.298 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:18:20.298 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:18:20.298 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:18:20.298 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:18:20.298 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:18:20.298 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:18:20.298 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:18:20.298 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:18:20.298 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:18:20.298 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:18:20.298 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:18:20.298 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:18:20.298 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:18:20.298 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:18:20.298 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:18:20.298 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:18:20.298 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:18:20.298 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:18:20.298 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:18:20.298 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:18:20.298 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:18:20.298 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:18:20.299 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:18:20.299 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:18:20.299 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:18:20.299 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:18:20.299 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:18:20.299 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:18:20.299 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:18:20.299 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:18:20.299 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:18:20.299 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:18:20.299 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:18:20.299 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:18:20.299 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:18:20.299 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:18:20.299 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:18:20.299 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:18:20.299 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:18:20.299 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:18:20.299 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:18:20.299 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:18:20.299 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:18:20.299 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:18:20.299 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:18:20.299 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:18:20.299 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:18:20.299 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:18:20.299 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:18:20.299 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:18:20.299 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:18:20.299 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:18:20.299 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:18:20.299 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
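The install entries above have placed the DPDK headers, the libdpdk pkg-config files and the PMD symlinks under /home/vagrant/spdk_repo/dpdk/build, which is the prefix the SPDK configure step below picks up. A minimal sketch of consuming that prefix through pkg-config; the prefix and module name are taken from the paths logged above, while the sample source file hello_eal.c is purely hypothetical:

# Sketch only: point pkg-config at the freshly installed DPDK prefix.
DPDK_PREFIX=/home/vagrant/spdk_repo/dpdk/build
export PKG_CONFIG_PATH="$DPDK_PREFIX/lib/pkgconfig"

# Confirm the installed libdpdk.pc is found and inspect its flags.
pkg-config --modversion libdpdk
pkg-config --cflags --libs libdpdk

# Compile a hypothetical test program against the installed tree.
cc hello_eal.c $(pkg-config --cflags libdpdk) -o hello_eal $(pkg-config --libs libdpdk)
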
00:18:20.299 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:18:20.299 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:18:20.299 00:44:23 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:18:20.299 00:44:23 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:18:20.299 00:44:23 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:18:20.299 00:44:23 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:18:20.299 00:18:20.299 real 1m7.523s 00:18:20.299 user 8m14.659s 00:18:20.299 sys 1m21.105s 00:18:20.299 00:44:23 build_native_dpdk -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:18:20.299 00:44:23 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:18:20.299 ************************************ 00:18:20.299 END TEST build_native_dpdk 00:18:20.299 ************************************ 00:18:20.299 00:44:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:18:20.299 00:44:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:18:20.299 00:44:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:18:20.299 00:44:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:18:20.299 00:44:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:18:20.299 00:44:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:18:20.299 00:44:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:18:20.299 00:44:23 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:18:20.557 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:18:20.557 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:18:20.557 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:18:20.557 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:21.123 Using 'verbs' RDMA provider 00:18:36.606 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:18:48.810 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:18:48.810 go version go1.21.1 linux/amd64 00:18:48.810 Creating mk/config.mk...done. 00:18:48.810 Creating mk/cc.flags.mk...done. 00:18:48.810 Type 'make' to build. 00:18:48.810 00:44:51 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:18:48.810 00:44:51 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:18:48.810 00:44:51 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:18:48.810 00:44:51 -- common/autotest_common.sh@10 -- $ set +x 00:18:48.810 ************************************ 00:18:48.810 START TEST make 00:18:48.810 ************************************ 00:18:48.810 00:44:51 make -- common/autotest_common.sh@1122 -- $ make -j10 00:18:48.810 make[1]: Nothing to be done for 'all'. 
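The configure invocation and the subsequent 'make -j10' above are the entire SPDK build step of this job. A sketch of reproducing it outside the autobuild/run_test wrappers, with every path and flag copied from the log output above:

# Sketch only: rebuild SPDK against the DPDK prefix installed earlier in this log.
cd /home/vagrant/spdk_repo/spdk

./configure \
  --enable-debug --enable-werror \
  --with-rdma --with-usdt --with-idxd \
  --with-fio=/usr/src/fio --with-iscsi-initiator \
  --disable-unit-tests --enable-ubsan --enable-coverage \
  --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
  --with-avahi --with-golang --with-shared

# Same parallelism as the job above.
make -j10
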
00:19:20.885 CC lib/log/log.o 00:19:20.885 CC lib/log/log_flags.o 00:19:20.885 CC lib/log/log_deprecated.o 00:19:20.885 CC lib/ut_mock/mock.o 00:19:20.885 CC lib/ut/ut.o 00:19:20.885 LIB libspdk_ut_mock.a 00:19:20.885 SO libspdk_ut_mock.so.6.0 00:19:20.885 LIB libspdk_log.a 00:19:20.885 LIB libspdk_ut.a 00:19:20.885 SYMLINK libspdk_ut_mock.so 00:19:20.885 SO libspdk_log.so.7.0 00:19:20.885 SO libspdk_ut.so.2.0 00:19:20.885 SYMLINK libspdk_log.so 00:19:20.885 SYMLINK libspdk_ut.so 00:19:20.885 CC lib/dma/dma.o 00:19:20.885 CC lib/util/base64.o 00:19:20.885 CC lib/ioat/ioat.o 00:19:20.885 CC lib/util/bit_array.o 00:19:20.885 CC lib/util/cpuset.o 00:19:20.885 CC lib/util/crc16.o 00:19:20.885 CC lib/util/crc32.o 00:19:20.885 CC lib/util/crc32c.o 00:19:20.885 CXX lib/trace_parser/trace.o 00:19:20.885 CC lib/vfio_user/host/vfio_user_pci.o 00:19:20.885 CC lib/util/crc32_ieee.o 00:19:20.885 CC lib/util/crc64.o 00:19:20.885 CC lib/util/dif.o 00:19:20.885 LIB libspdk_dma.a 00:19:20.885 CC lib/util/fd.o 00:19:20.885 CC lib/util/file.o 00:19:20.885 SO libspdk_dma.so.4.0 00:19:20.885 CC lib/util/hexlify.o 00:19:20.885 CC lib/vfio_user/host/vfio_user.o 00:19:20.885 SYMLINK libspdk_dma.so 00:19:20.885 CC lib/util/iov.o 00:19:20.885 LIB libspdk_ioat.a 00:19:20.885 CC lib/util/math.o 00:19:20.885 SO libspdk_ioat.so.7.0 00:19:20.885 CC lib/util/pipe.o 00:19:20.885 CC lib/util/strerror_tls.o 00:19:20.885 SYMLINK libspdk_ioat.so 00:19:20.885 CC lib/util/string.o 00:19:20.885 CC lib/util/uuid.o 00:19:20.885 CC lib/util/fd_group.o 00:19:20.885 CC lib/util/xor.o 00:19:20.885 CC lib/util/zipf.o 00:19:20.885 LIB libspdk_vfio_user.a 00:19:20.885 SO libspdk_vfio_user.so.5.0 00:19:20.885 SYMLINK libspdk_vfio_user.so 00:19:20.885 LIB libspdk_util.a 00:19:20.885 SO libspdk_util.so.9.0 00:19:20.885 LIB libspdk_trace_parser.a 00:19:20.885 SO libspdk_trace_parser.so.5.0 00:19:20.885 SYMLINK libspdk_util.so 00:19:20.885 SYMLINK libspdk_trace_parser.so 00:19:20.885 CC lib/env_dpdk/env.o 00:19:20.885 CC lib/env_dpdk/pci.o 00:19:20.885 CC lib/env_dpdk/memory.o 00:19:20.885 CC lib/env_dpdk/init.o 00:19:20.885 CC lib/env_dpdk/threads.o 00:19:20.885 CC lib/json/json_parse.o 00:19:20.885 CC lib/idxd/idxd.o 00:19:20.885 CC lib/conf/conf.o 00:19:20.885 CC lib/vmd/vmd.o 00:19:20.885 CC lib/rdma/common.o 00:19:20.885 CC lib/rdma/rdma_verbs.o 00:19:20.885 CC lib/json/json_util.o 00:19:20.885 LIB libspdk_conf.a 00:19:20.885 CC lib/json/json_write.o 00:19:20.885 LIB libspdk_rdma.a 00:19:20.885 SO libspdk_conf.so.6.0 00:19:20.885 CC lib/idxd/idxd_user.o 00:19:20.885 SO libspdk_rdma.so.6.0 00:19:20.885 SYMLINK libspdk_conf.so 00:19:20.885 SYMLINK libspdk_rdma.so 00:19:20.885 CC lib/env_dpdk/pci_ioat.o 00:19:20.885 CC lib/vmd/led.o 00:19:20.885 CC lib/env_dpdk/pci_virtio.o 00:19:20.885 CC lib/env_dpdk/pci_vmd.o 00:19:20.885 CC lib/env_dpdk/pci_idxd.o 00:19:20.885 CC lib/env_dpdk/pci_event.o 00:19:20.885 CC lib/env_dpdk/sigbus_handler.o 00:19:20.885 LIB libspdk_vmd.a 00:19:20.885 LIB libspdk_json.a 00:19:20.885 SO libspdk_vmd.so.6.0 00:19:20.886 SO libspdk_json.so.6.0 00:19:20.886 CC lib/env_dpdk/pci_dpdk.o 00:19:20.886 SYMLINK libspdk_vmd.so 00:19:20.886 CC lib/env_dpdk/pci_dpdk_2207.o 00:19:20.886 CC lib/env_dpdk/pci_dpdk_2211.o 00:19:20.886 SYMLINK libspdk_json.so 00:19:20.886 LIB libspdk_idxd.a 00:19:20.886 SO libspdk_idxd.so.12.0 00:19:20.886 SYMLINK libspdk_idxd.so 00:19:20.886 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:19:20.886 CC lib/jsonrpc/jsonrpc_server.o 00:19:20.886 CC lib/jsonrpc/jsonrpc_client.o 00:19:20.886 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:19:20.886 LIB libspdk_jsonrpc.a 00:19:20.886 SO libspdk_jsonrpc.so.6.0 00:19:20.886 LIB libspdk_env_dpdk.a 00:19:20.886 SYMLINK libspdk_jsonrpc.so 00:19:20.886 SO libspdk_env_dpdk.so.14.0 00:19:21.145 SYMLINK libspdk_env_dpdk.so 00:19:21.145 CC lib/rpc/rpc.o 00:19:21.403 LIB libspdk_rpc.a 00:19:21.403 SO libspdk_rpc.so.6.0 00:19:21.661 SYMLINK libspdk_rpc.so 00:19:21.661 CC lib/notify/notify.o 00:19:21.661 CC lib/trace/trace.o 00:19:21.661 CC lib/notify/notify_rpc.o 00:19:21.661 CC lib/trace/trace_rpc.o 00:19:21.661 CC lib/trace/trace_flags.o 00:19:21.919 CC lib/keyring/keyring.o 00:19:21.919 CC lib/keyring/keyring_rpc.o 00:19:22.178 LIB libspdk_keyring.a 00:19:22.178 LIB libspdk_notify.a 00:19:22.178 SO libspdk_keyring.so.1.0 00:19:22.178 SO libspdk_notify.so.6.0 00:19:22.178 LIB libspdk_trace.a 00:19:22.178 SYMLINK libspdk_notify.so 00:19:22.178 SYMLINK libspdk_keyring.so 00:19:22.178 SO libspdk_trace.so.10.0 00:19:22.437 SYMLINK libspdk_trace.so 00:19:22.696 CC lib/thread/thread.o 00:19:22.696 CC lib/thread/iobuf.o 00:19:22.696 CC lib/sock/sock_rpc.o 00:19:22.696 CC lib/sock/sock.o 00:19:23.265 LIB libspdk_sock.a 00:19:23.265 SO libspdk_sock.so.9.0 00:19:23.265 SYMLINK libspdk_sock.so 00:19:23.524 CC lib/nvme/nvme_ctrlr_cmd.o 00:19:23.524 CC lib/nvme/nvme_ctrlr.o 00:19:23.524 CC lib/nvme/nvme_fabric.o 00:19:23.524 CC lib/nvme/nvme_ns_cmd.o 00:19:23.524 CC lib/nvme/nvme_ns.o 00:19:23.524 CC lib/nvme/nvme_pcie.o 00:19:23.524 CC lib/nvme/nvme_pcie_common.o 00:19:23.524 CC lib/nvme/nvme_qpair.o 00:19:23.524 CC lib/nvme/nvme.o 00:19:24.459 CC lib/nvme/nvme_quirks.o 00:19:24.459 LIB libspdk_thread.a 00:19:24.459 SO libspdk_thread.so.10.0 00:19:24.459 SYMLINK libspdk_thread.so 00:19:24.459 CC lib/nvme/nvme_transport.o 00:19:24.459 CC lib/nvme/nvme_discovery.o 00:19:24.459 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:19:24.717 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:19:24.717 CC lib/nvme/nvme_tcp.o 00:19:25.041 CC lib/accel/accel.o 00:19:25.301 CC lib/blob/blobstore.o 00:19:25.301 CC lib/init/json_config.o 00:19:25.301 CC lib/virtio/virtio.o 00:19:25.301 CC lib/virtio/virtio_vhost_user.o 00:19:25.301 CC lib/init/subsystem.o 00:19:25.560 CC lib/accel/accel_rpc.o 00:19:25.560 CC lib/blob/request.o 00:19:25.560 CC lib/blob/zeroes.o 00:19:25.560 CC lib/blob/blob_bs_dev.o 00:19:25.560 CC lib/accel/accel_sw.o 00:19:25.560 CC lib/init/subsystem_rpc.o 00:19:25.820 CC lib/virtio/virtio_vfio_user.o 00:19:25.820 CC lib/init/rpc.o 00:19:25.820 CC lib/virtio/virtio_pci.o 00:19:26.078 CC lib/nvme/nvme_opal.o 00:19:26.078 CC lib/nvme/nvme_io_msg.o 00:19:26.078 CC lib/nvme/nvme_poll_group.o 00:19:26.078 CC lib/nvme/nvme_zns.o 00:19:26.078 LIB libspdk_init.a 00:19:26.078 CC lib/nvme/nvme_stubs.o 00:19:26.078 SO libspdk_init.so.5.0 00:19:26.337 SYMLINK libspdk_init.so 00:19:26.337 CC lib/nvme/nvme_auth.o 00:19:26.337 LIB libspdk_virtio.a 00:19:26.337 SO libspdk_virtio.so.7.0 00:19:26.596 SYMLINK libspdk_virtio.so 00:19:26.596 CC lib/nvme/nvme_cuse.o 00:19:26.596 CC lib/nvme/nvme_rdma.o 00:19:26.596 LIB libspdk_accel.a 00:19:26.596 SO libspdk_accel.so.15.0 00:19:26.596 SYMLINK libspdk_accel.so 00:19:26.596 CC lib/event/app.o 00:19:26.596 CC lib/event/reactor.o 00:19:26.596 CC lib/event/log_rpc.o 00:19:26.855 CC lib/event/app_rpc.o 00:19:26.855 CC lib/event/scheduler_static.o 00:19:27.113 CC lib/bdev/bdev.o 00:19:27.114 CC lib/bdev/bdev_rpc.o 00:19:27.114 CC lib/bdev/bdev_zone.o 00:19:27.114 CC lib/bdev/part.o 00:19:27.114 CC lib/bdev/scsi_nvme.o 00:19:27.114 LIB libspdk_event.a 
00:19:27.114 SO libspdk_event.so.13.0 00:19:27.373 SYMLINK libspdk_event.so 00:19:28.307 LIB libspdk_nvme.a 00:19:28.307 SO libspdk_nvme.so.13.0 00:19:28.874 SYMLINK libspdk_nvme.so 00:19:28.874 LIB libspdk_blob.a 00:19:28.874 SO libspdk_blob.so.11.0 00:19:29.134 SYMLINK libspdk_blob.so 00:19:29.393 CC lib/blobfs/blobfs.o 00:19:29.393 CC lib/blobfs/tree.o 00:19:29.393 CC lib/lvol/lvol.o 00:19:30.328 LIB libspdk_bdev.a 00:19:30.328 SO libspdk_bdev.so.15.0 00:19:30.329 LIB libspdk_blobfs.a 00:19:30.329 SO libspdk_blobfs.so.10.0 00:19:30.329 SYMLINK libspdk_bdev.so 00:19:30.329 LIB libspdk_lvol.a 00:19:30.329 SO libspdk_lvol.so.10.0 00:19:30.329 SYMLINK libspdk_blobfs.so 00:19:30.329 SYMLINK libspdk_lvol.so 00:19:30.586 CC lib/scsi/dev.o 00:19:30.586 CC lib/scsi/lun.o 00:19:30.586 CC lib/nvmf/ctrlr.o 00:19:30.587 CC lib/ftl/ftl_core.o 00:19:30.587 CC lib/nvmf/ctrlr_discovery.o 00:19:30.587 CC lib/scsi/port.o 00:19:30.587 CC lib/ublk/ublk.o 00:19:30.587 CC lib/scsi/scsi.o 00:19:30.587 CC lib/ftl/ftl_init.o 00:19:30.587 CC lib/nbd/nbd.o 00:19:30.587 CC lib/scsi/scsi_bdev.o 00:19:30.587 CC lib/scsi/scsi_pr.o 00:19:30.844 CC lib/scsi/scsi_rpc.o 00:19:30.844 CC lib/ftl/ftl_layout.o 00:19:30.844 CC lib/ublk/ublk_rpc.o 00:19:30.844 CC lib/ftl/ftl_debug.o 00:19:30.844 CC lib/ftl/ftl_io.o 00:19:31.102 CC lib/nbd/nbd_rpc.o 00:19:31.102 CC lib/nvmf/ctrlr_bdev.o 00:19:31.102 CC lib/nvmf/subsystem.o 00:19:31.102 CC lib/scsi/task.o 00:19:31.102 CC lib/nvmf/nvmf.o 00:19:31.102 CC lib/nvmf/nvmf_rpc.o 00:19:31.102 CC lib/ftl/ftl_sb.o 00:19:31.102 LIB libspdk_nbd.a 00:19:31.361 CC lib/ftl/ftl_l2p.o 00:19:31.361 SO libspdk_nbd.so.7.0 00:19:31.361 LIB libspdk_ublk.a 00:19:31.361 SYMLINK libspdk_nbd.so 00:19:31.361 CC lib/nvmf/transport.o 00:19:31.361 SO libspdk_ublk.so.3.0 00:19:31.361 LIB libspdk_scsi.a 00:19:31.361 CC lib/ftl/ftl_l2p_flat.o 00:19:31.361 SYMLINK libspdk_ublk.so 00:19:31.361 CC lib/ftl/ftl_nv_cache.o 00:19:31.361 SO libspdk_scsi.so.9.0 00:19:31.620 CC lib/nvmf/tcp.o 00:19:31.620 SYMLINK libspdk_scsi.so 00:19:31.620 CC lib/nvmf/stubs.o 00:19:31.620 CC lib/ftl/ftl_band.o 00:19:31.879 CC lib/ftl/ftl_band_ops.o 00:19:32.138 CC lib/nvmf/mdns_server.o 00:19:32.138 CC lib/nvmf/rdma.o 00:19:32.138 CC lib/nvmf/auth.o 00:19:32.138 CC lib/ftl/ftl_writer.o 00:19:32.397 CC lib/ftl/ftl_rq.o 00:19:32.397 CC lib/iscsi/conn.o 00:19:32.397 CC lib/vhost/vhost.o 00:19:32.397 CC lib/vhost/vhost_rpc.o 00:19:32.397 CC lib/ftl/ftl_reloc.o 00:19:32.397 CC lib/vhost/vhost_scsi.o 00:19:32.397 CC lib/ftl/ftl_l2p_cache.o 00:19:32.655 CC lib/ftl/ftl_p2l.o 00:19:32.655 CC lib/vhost/vhost_blk.o 00:19:32.915 CC lib/iscsi/init_grp.o 00:19:32.915 CC lib/iscsi/iscsi.o 00:19:32.915 CC lib/vhost/rte_vhost_user.o 00:19:33.184 CC lib/iscsi/md5.o 00:19:33.184 CC lib/ftl/mngt/ftl_mngt.o 00:19:33.184 CC lib/iscsi/param.o 00:19:33.184 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:19:33.184 CC lib/iscsi/portal_grp.o 00:19:33.184 CC lib/iscsi/tgt_node.o 00:19:33.443 CC lib/iscsi/iscsi_subsystem.o 00:19:33.443 CC lib/iscsi/iscsi_rpc.o 00:19:33.443 CC lib/iscsi/task.o 00:19:33.443 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:19:33.443 CC lib/ftl/mngt/ftl_mngt_startup.o 00:19:33.701 CC lib/ftl/mngt/ftl_mngt_md.o 00:19:33.701 CC lib/ftl/mngt/ftl_mngt_misc.o 00:19:33.701 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:19:33.701 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:19:33.701 CC lib/ftl/mngt/ftl_mngt_band.o 00:19:33.960 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:19:33.960 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:19:33.960 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:19:33.960 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:19:33.960 CC lib/ftl/utils/ftl_conf.o 00:19:33.960 CC lib/ftl/utils/ftl_md.o 00:19:34.218 CC lib/ftl/utils/ftl_mempool.o 00:19:34.219 CC lib/ftl/utils/ftl_bitmap.o 00:19:34.219 LIB libspdk_vhost.a 00:19:34.219 CC lib/ftl/utils/ftl_property.o 00:19:34.219 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:19:34.219 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:19:34.219 SO libspdk_vhost.so.8.0 00:19:34.219 LIB libspdk_nvmf.a 00:19:34.219 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:19:34.219 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:19:34.477 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:19:34.477 SYMLINK libspdk_vhost.so 00:19:34.477 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:19:34.477 LIB libspdk_iscsi.a 00:19:34.477 SO libspdk_nvmf.so.18.0 00:19:34.477 CC lib/ftl/upgrade/ftl_sb_v3.o 00:19:34.477 CC lib/ftl/upgrade/ftl_sb_v5.o 00:19:34.477 CC lib/ftl/nvc/ftl_nvc_dev.o 00:19:34.477 SO libspdk_iscsi.so.8.0 00:19:34.477 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:19:34.477 CC lib/ftl/base/ftl_base_dev.o 00:19:34.477 CC lib/ftl/base/ftl_base_bdev.o 00:19:34.477 CC lib/ftl/ftl_trace.o 00:19:34.735 SYMLINK libspdk_nvmf.so 00:19:34.735 SYMLINK libspdk_iscsi.so 00:19:34.735 LIB libspdk_ftl.a 00:19:34.994 SO libspdk_ftl.so.9.0 00:19:35.560 SYMLINK libspdk_ftl.so 00:19:35.818 CC module/env_dpdk/env_dpdk_rpc.o 00:19:36.078 CC module/sock/posix/posix.o 00:19:36.078 CC module/keyring/file/keyring.o 00:19:36.078 CC module/accel/error/accel_error.o 00:19:36.078 CC module/scheduler/gscheduler/gscheduler.o 00:19:36.078 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:19:36.078 CC module/accel/dsa/accel_dsa.o 00:19:36.078 CC module/blob/bdev/blob_bdev.o 00:19:36.078 CC module/accel/ioat/accel_ioat.o 00:19:36.078 CC module/scheduler/dynamic/scheduler_dynamic.o 00:19:36.078 LIB libspdk_env_dpdk_rpc.a 00:19:36.078 SO libspdk_env_dpdk_rpc.so.6.0 00:19:36.337 CC module/keyring/file/keyring_rpc.o 00:19:36.337 LIB libspdk_scheduler_dpdk_governor.a 00:19:36.337 CC module/accel/error/accel_error_rpc.o 00:19:36.337 LIB libspdk_scheduler_gscheduler.a 00:19:36.337 SYMLINK libspdk_env_dpdk_rpc.so 00:19:36.337 CC module/accel/ioat/accel_ioat_rpc.o 00:19:36.337 SO libspdk_scheduler_dpdk_governor.so.4.0 00:19:36.337 SO libspdk_scheduler_gscheduler.so.4.0 00:19:36.337 CC module/accel/dsa/accel_dsa_rpc.o 00:19:36.337 SYMLINK libspdk_scheduler_dpdk_governor.so 00:19:36.337 LIB libspdk_scheduler_dynamic.a 00:19:36.337 SYMLINK libspdk_scheduler_gscheduler.so 00:19:36.337 LIB libspdk_blob_bdev.a 00:19:36.337 LIB libspdk_keyring_file.a 00:19:36.337 SO libspdk_scheduler_dynamic.so.4.0 00:19:36.337 SO libspdk_blob_bdev.so.11.0 00:19:36.337 LIB libspdk_accel_error.a 00:19:36.337 SO libspdk_keyring_file.so.1.0 00:19:36.337 LIB libspdk_accel_ioat.a 00:19:36.337 SYMLINK libspdk_scheduler_dynamic.so 00:19:36.337 SYMLINK libspdk_blob_bdev.so 00:19:36.337 SO libspdk_accel_error.so.2.0 00:19:36.337 SO libspdk_accel_ioat.so.6.0 00:19:36.626 LIB libspdk_accel_dsa.a 00:19:36.626 SYMLINK libspdk_keyring_file.so 00:19:36.626 SO libspdk_accel_dsa.so.5.0 00:19:36.626 SYMLINK libspdk_accel_ioat.so 00:19:36.626 CC module/accel/iaa/accel_iaa_rpc.o 00:19:36.626 CC module/accel/iaa/accel_iaa.o 00:19:36.626 SYMLINK libspdk_accel_error.so 00:19:36.626 SYMLINK libspdk_accel_dsa.so 00:19:36.626 CC module/bdev/delay/vbdev_delay.o 00:19:36.626 CC module/blobfs/bdev/blobfs_bdev.o 00:19:36.626 CC module/bdev/lvol/vbdev_lvol.o 00:19:36.626 CC module/bdev/gpt/gpt.o 00:19:36.626 CC module/bdev/malloc/bdev_malloc.o 00:19:36.884 CC 
module/bdev/error/vbdev_error.o 00:19:36.884 LIB libspdk_accel_iaa.a 00:19:36.884 CC module/bdev/null/bdev_null.o 00:19:36.884 SO libspdk_accel_iaa.so.3.0 00:19:36.884 LIB libspdk_sock_posix.a 00:19:36.884 CC module/bdev/nvme/bdev_nvme.o 00:19:36.884 SO libspdk_sock_posix.so.6.0 00:19:36.884 SYMLINK libspdk_accel_iaa.so 00:19:36.884 CC module/bdev/error/vbdev_error_rpc.o 00:19:36.884 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:19:36.884 CC module/bdev/gpt/vbdev_gpt.o 00:19:36.884 SYMLINK libspdk_sock_posix.so 00:19:36.884 CC module/bdev/null/bdev_null_rpc.o 00:19:37.142 LIB libspdk_blobfs_bdev.a 00:19:37.142 LIB libspdk_bdev_error.a 00:19:37.142 SO libspdk_blobfs_bdev.so.6.0 00:19:37.142 SO libspdk_bdev_error.so.6.0 00:19:37.142 CC module/bdev/malloc/bdev_malloc_rpc.o 00:19:37.142 CC module/bdev/delay/vbdev_delay_rpc.o 00:19:37.142 LIB libspdk_bdev_null.a 00:19:37.142 LIB libspdk_bdev_gpt.a 00:19:37.142 CC module/bdev/passthru/vbdev_passthru.o 00:19:37.143 SYMLINK libspdk_blobfs_bdev.so 00:19:37.400 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:19:37.400 SO libspdk_bdev_gpt.so.6.0 00:19:37.400 SO libspdk_bdev_null.so.6.0 00:19:37.400 SYMLINK libspdk_bdev_error.so 00:19:37.400 CC module/bdev/raid/bdev_raid.o 00:19:37.400 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:19:37.400 SYMLINK libspdk_bdev_null.so 00:19:37.400 SYMLINK libspdk_bdev_gpt.so 00:19:37.400 CC module/bdev/raid/bdev_raid_rpc.o 00:19:37.400 LIB libspdk_bdev_malloc.a 00:19:37.401 LIB libspdk_bdev_delay.a 00:19:37.401 SO libspdk_bdev_malloc.so.6.0 00:19:37.401 SO libspdk_bdev_delay.so.6.0 00:19:37.401 CC module/bdev/raid/bdev_raid_sb.o 00:19:37.401 SYMLINK libspdk_bdev_malloc.so 00:19:37.659 SYMLINK libspdk_bdev_delay.so 00:19:37.659 CC module/bdev/split/vbdev_split.o 00:19:37.659 CC module/bdev/zone_block/vbdev_zone_block.o 00:19:37.659 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:19:37.659 LIB libspdk_bdev_passthru.a 00:19:37.659 SO libspdk_bdev_passthru.so.6.0 00:19:37.659 CC module/bdev/aio/bdev_aio.o 00:19:37.659 LIB libspdk_bdev_lvol.a 00:19:37.659 CC module/bdev/ftl/bdev_ftl.o 00:19:37.659 SYMLINK libspdk_bdev_passthru.so 00:19:37.659 SO libspdk_bdev_lvol.so.6.0 00:19:37.659 CC module/bdev/ftl/bdev_ftl_rpc.o 00:19:37.659 CC module/bdev/aio/bdev_aio_rpc.o 00:19:37.917 CC module/bdev/nvme/bdev_nvme_rpc.o 00:19:37.917 SYMLINK libspdk_bdev_lvol.so 00:19:37.917 CC module/bdev/nvme/nvme_rpc.o 00:19:37.917 CC module/bdev/split/vbdev_split_rpc.o 00:19:37.917 LIB libspdk_bdev_zone_block.a 00:19:37.917 CC module/bdev/nvme/bdev_mdns_client.o 00:19:37.917 SO libspdk_bdev_zone_block.so.6.0 00:19:37.917 CC module/bdev/nvme/vbdev_opal.o 00:19:38.175 LIB libspdk_bdev_split.a 00:19:38.175 SO libspdk_bdev_split.so.6.0 00:19:38.175 LIB libspdk_bdev_aio.a 00:19:38.175 LIB libspdk_bdev_ftl.a 00:19:38.175 SYMLINK libspdk_bdev_zone_block.so 00:19:38.175 CC module/bdev/nvme/vbdev_opal_rpc.o 00:19:38.175 CC module/bdev/raid/raid0.o 00:19:38.175 SO libspdk_bdev_ftl.so.6.0 00:19:38.175 SO libspdk_bdev_aio.so.6.0 00:19:38.175 SYMLINK libspdk_bdev_split.so 00:19:38.175 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:19:38.175 SYMLINK libspdk_bdev_aio.so 00:19:38.175 SYMLINK libspdk_bdev_ftl.so 00:19:38.175 CC module/bdev/raid/raid1.o 00:19:38.434 CC module/bdev/raid/concat.o 00:19:38.434 CC module/bdev/iscsi/bdev_iscsi.o 00:19:38.434 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:19:38.692 CC module/bdev/virtio/bdev_virtio_scsi.o 00:19:38.692 CC module/bdev/virtio/bdev_virtio_blk.o 00:19:38.692 CC module/bdev/virtio/bdev_virtio_rpc.o 00:19:38.692 LIB 
libspdk_bdev_raid.a 00:19:38.692 SO libspdk_bdev_raid.so.6.0 00:19:38.951 SYMLINK libspdk_bdev_raid.so 00:19:38.951 LIB libspdk_bdev_iscsi.a 00:19:38.951 SO libspdk_bdev_iscsi.so.6.0 00:19:39.209 SYMLINK libspdk_bdev_iscsi.so 00:19:39.209 LIB libspdk_bdev_virtio.a 00:19:39.209 SO libspdk_bdev_virtio.so.6.0 00:19:39.209 SYMLINK libspdk_bdev_virtio.so 00:19:39.209 LIB libspdk_bdev_nvme.a 00:19:39.467 SO libspdk_bdev_nvme.so.7.0 00:19:39.467 SYMLINK libspdk_bdev_nvme.so 00:19:40.034 CC module/event/subsystems/keyring/keyring.o 00:19:40.034 CC module/event/subsystems/scheduler/scheduler.o 00:19:40.034 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:19:40.034 CC module/event/subsystems/iobuf/iobuf.o 00:19:40.034 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:19:40.034 CC module/event/subsystems/vmd/vmd.o 00:19:40.034 CC module/event/subsystems/sock/sock.o 00:19:40.034 CC module/event/subsystems/vmd/vmd_rpc.o 00:19:40.293 LIB libspdk_event_keyring.a 00:19:40.293 LIB libspdk_event_sock.a 00:19:40.293 LIB libspdk_event_vhost_blk.a 00:19:40.293 LIB libspdk_event_scheduler.a 00:19:40.293 LIB libspdk_event_vmd.a 00:19:40.293 SO libspdk_event_keyring.so.1.0 00:19:40.293 SO libspdk_event_sock.so.5.0 00:19:40.293 SO libspdk_event_vhost_blk.so.3.0 00:19:40.293 SO libspdk_event_vmd.so.6.0 00:19:40.293 SO libspdk_event_scheduler.so.4.0 00:19:40.293 LIB libspdk_event_iobuf.a 00:19:40.293 SO libspdk_event_iobuf.so.3.0 00:19:40.293 SYMLINK libspdk_event_sock.so 00:19:40.293 SYMLINK libspdk_event_keyring.so 00:19:40.293 SYMLINK libspdk_event_scheduler.so 00:19:40.293 SYMLINK libspdk_event_vhost_blk.so 00:19:40.293 SYMLINK libspdk_event_vmd.so 00:19:40.293 SYMLINK libspdk_event_iobuf.so 00:19:40.551 CC module/event/subsystems/accel/accel.o 00:19:40.817 LIB libspdk_event_accel.a 00:19:40.817 SO libspdk_event_accel.so.6.0 00:19:41.083 SYMLINK libspdk_event_accel.so 00:19:41.341 CC module/event/subsystems/bdev/bdev.o 00:19:41.341 LIB libspdk_event_bdev.a 00:19:41.599 SO libspdk_event_bdev.so.6.0 00:19:41.599 SYMLINK libspdk_event_bdev.so 00:19:41.858 CC module/event/subsystems/ublk/ublk.o 00:19:41.858 CC module/event/subsystems/scsi/scsi.o 00:19:41.858 CC module/event/subsystems/nbd/nbd.o 00:19:41.858 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:19:41.858 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:19:42.116 LIB libspdk_event_ublk.a 00:19:42.116 LIB libspdk_event_nbd.a 00:19:42.116 SO libspdk_event_ublk.so.3.0 00:19:42.116 SO libspdk_event_nbd.so.6.0 00:19:42.116 SYMLINK libspdk_event_ublk.so 00:19:42.116 SYMLINK libspdk_event_nbd.so 00:19:42.116 LIB libspdk_event_scsi.a 00:19:42.116 SO libspdk_event_scsi.so.6.0 00:19:42.375 LIB libspdk_event_nvmf.a 00:19:42.375 SYMLINK libspdk_event_scsi.so 00:19:42.375 SO libspdk_event_nvmf.so.6.0 00:19:42.375 SYMLINK libspdk_event_nvmf.so 00:19:42.634 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:19:42.634 CC module/event/subsystems/iscsi/iscsi.o 00:19:42.893 LIB libspdk_event_vhost_scsi.a 00:19:42.893 LIB libspdk_event_iscsi.a 00:19:42.893 SO libspdk_event_vhost_scsi.so.3.0 00:19:42.893 SO libspdk_event_iscsi.so.6.0 00:19:42.893 SYMLINK libspdk_event_vhost_scsi.so 00:19:42.893 SYMLINK libspdk_event_iscsi.so 00:19:43.152 SO libspdk.so.6.0 00:19:43.152 SYMLINK libspdk.so 00:19:43.411 CXX app/trace/trace.o 00:19:43.411 CC examples/vmd/lsvmd/lsvmd.o 00:19:43.411 CC examples/ioat/perf/perf.o 00:19:43.411 CC examples/sock/hello_world/hello_sock.o 00:19:43.411 CC examples/accel/perf/accel_perf.o 00:19:43.411 CC examples/nvme/hello_world/hello_world.o 00:19:43.411 CC 
examples/bdev/hello_world/hello_bdev.o 00:19:43.411 CC test/app/bdev_svc/bdev_svc.o 00:19:43.669 CC test/accel/dif/dif.o 00:19:43.669 CC examples/blob/hello_world/hello_blob.o 00:19:43.669 LINK lsvmd 00:19:43.669 LINK bdev_svc 00:19:43.669 LINK ioat_perf 00:19:43.669 LINK hello_sock 00:19:43.927 LINK hello_bdev 00:19:43.927 LINK hello_world 00:19:43.927 LINK hello_blob 00:19:43.927 LINK spdk_trace 00:19:43.927 CC examples/vmd/led/led.o 00:19:43.927 CC examples/ioat/verify/verify.o 00:19:43.927 LINK dif 00:19:43.927 LINK accel_perf 00:19:44.186 CC examples/nvme/reconnect/reconnect.o 00:19:44.186 LINK led 00:19:44.186 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:19:44.186 CC examples/blob/cli/blobcli.o 00:19:44.186 LINK verify 00:19:44.445 CC examples/bdev/bdevperf/bdevperf.o 00:19:44.445 CC app/trace_record/trace_record.o 00:19:44.445 CC test/bdev/bdevio/bdevio.o 00:19:44.445 CC examples/nvme/nvme_manage/nvme_manage.o 00:19:44.445 CC test/blobfs/mkfs/mkfs.o 00:19:44.703 CC examples/nvme/hotplug/hotplug.o 00:19:44.703 CC examples/nvme/arbitration/arbitration.o 00:19:44.703 LINK reconnect 00:19:44.703 LINK spdk_trace_record 00:19:44.703 LINK nvme_fuzz 00:19:44.703 LINK mkfs 00:19:44.703 LINK bdevio 00:19:44.703 LINK blobcli 00:19:44.963 LINK hotplug 00:19:44.963 LINK arbitration 00:19:44.963 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:19:44.963 CC examples/nvme/cmb_copy/cmb_copy.o 00:19:44.963 CC app/nvmf_tgt/nvmf_main.o 00:19:44.963 LINK nvme_manage 00:19:44.963 CC examples/nvme/abort/abort.o 00:19:44.963 CC test/app/histogram_perf/histogram_perf.o 00:19:45.222 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:19:45.222 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:19:45.222 LINK cmb_copy 00:19:45.222 TEST_HEADER include/spdk/accel.h 00:19:45.222 TEST_HEADER include/spdk/accel_module.h 00:19:45.222 TEST_HEADER include/spdk/assert.h 00:19:45.222 TEST_HEADER include/spdk/barrier.h 00:19:45.222 TEST_HEADER include/spdk/base64.h 00:19:45.222 TEST_HEADER include/spdk/bdev.h 00:19:45.222 TEST_HEADER include/spdk/bdev_module.h 00:19:45.222 TEST_HEADER include/spdk/bdev_zone.h 00:19:45.222 TEST_HEADER include/spdk/bit_array.h 00:19:45.222 TEST_HEADER include/spdk/bit_pool.h 00:19:45.222 LINK bdevperf 00:19:45.222 TEST_HEADER include/spdk/blob_bdev.h 00:19:45.222 TEST_HEADER include/spdk/blobfs_bdev.h 00:19:45.222 TEST_HEADER include/spdk/blobfs.h 00:19:45.222 TEST_HEADER include/spdk/blob.h 00:19:45.222 TEST_HEADER include/spdk/conf.h 00:19:45.222 TEST_HEADER include/spdk/config.h 00:19:45.222 TEST_HEADER include/spdk/cpuset.h 00:19:45.222 LINK histogram_perf 00:19:45.222 TEST_HEADER include/spdk/crc16.h 00:19:45.222 TEST_HEADER include/spdk/crc32.h 00:19:45.222 TEST_HEADER include/spdk/crc64.h 00:19:45.222 TEST_HEADER include/spdk/dif.h 00:19:45.222 TEST_HEADER include/spdk/dma.h 00:19:45.222 TEST_HEADER include/spdk/endian.h 00:19:45.222 LINK nvmf_tgt 00:19:45.222 TEST_HEADER include/spdk/env_dpdk.h 00:19:45.222 TEST_HEADER include/spdk/env.h 00:19:45.222 TEST_HEADER include/spdk/event.h 00:19:45.222 TEST_HEADER include/spdk/fd_group.h 00:19:45.222 TEST_HEADER include/spdk/fd.h 00:19:45.222 TEST_HEADER include/spdk/file.h 00:19:45.222 TEST_HEADER include/spdk/ftl.h 00:19:45.222 TEST_HEADER include/spdk/gpt_spec.h 00:19:45.222 TEST_HEADER include/spdk/hexlify.h 00:19:45.222 TEST_HEADER include/spdk/histogram_data.h 00:19:45.222 TEST_HEADER include/spdk/idxd.h 00:19:45.222 TEST_HEADER include/spdk/idxd_spec.h 00:19:45.222 TEST_HEADER include/spdk/init.h 00:19:45.222 TEST_HEADER 
include/spdk/ioat.h 00:19:45.222 TEST_HEADER include/spdk/ioat_spec.h 00:19:45.222 TEST_HEADER include/spdk/iscsi_spec.h 00:19:45.222 TEST_HEADER include/spdk/json.h 00:19:45.222 TEST_HEADER include/spdk/jsonrpc.h 00:19:45.222 TEST_HEADER include/spdk/keyring.h 00:19:45.222 TEST_HEADER include/spdk/keyring_module.h 00:19:45.222 TEST_HEADER include/spdk/likely.h 00:19:45.222 TEST_HEADER include/spdk/log.h 00:19:45.222 TEST_HEADER include/spdk/lvol.h 00:19:45.222 TEST_HEADER include/spdk/memory.h 00:19:45.222 TEST_HEADER include/spdk/mmio.h 00:19:45.222 TEST_HEADER include/spdk/nbd.h 00:19:45.222 TEST_HEADER include/spdk/notify.h 00:19:45.222 TEST_HEADER include/spdk/nvme.h 00:19:45.222 LINK pmr_persistence 00:19:45.222 TEST_HEADER include/spdk/nvme_intel.h 00:19:45.222 TEST_HEADER include/spdk/nvme_ocssd.h 00:19:45.222 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:19:45.222 TEST_HEADER include/spdk/nvme_spec.h 00:19:45.222 TEST_HEADER include/spdk/nvme_zns.h 00:19:45.222 TEST_HEADER include/spdk/nvmf_cmd.h 00:19:45.222 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:19:45.222 TEST_HEADER include/spdk/nvmf.h 00:19:45.222 TEST_HEADER include/spdk/nvmf_spec.h 00:19:45.222 TEST_HEADER include/spdk/nvmf_transport.h 00:19:45.222 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:19:45.481 TEST_HEADER include/spdk/opal.h 00:19:45.481 TEST_HEADER include/spdk/opal_spec.h 00:19:45.481 TEST_HEADER include/spdk/pci_ids.h 00:19:45.481 TEST_HEADER include/spdk/pipe.h 00:19:45.481 TEST_HEADER include/spdk/queue.h 00:19:45.481 TEST_HEADER include/spdk/reduce.h 00:19:45.481 TEST_HEADER include/spdk/rpc.h 00:19:45.481 TEST_HEADER include/spdk/scheduler.h 00:19:45.481 TEST_HEADER include/spdk/scsi.h 00:19:45.481 TEST_HEADER include/spdk/scsi_spec.h 00:19:45.481 TEST_HEADER include/spdk/sock.h 00:19:45.481 TEST_HEADER include/spdk/stdinc.h 00:19:45.481 TEST_HEADER include/spdk/string.h 00:19:45.481 TEST_HEADER include/spdk/thread.h 00:19:45.481 TEST_HEADER include/spdk/trace.h 00:19:45.481 TEST_HEADER include/spdk/trace_parser.h 00:19:45.481 TEST_HEADER include/spdk/tree.h 00:19:45.481 TEST_HEADER include/spdk/ublk.h 00:19:45.481 TEST_HEADER include/spdk/util.h 00:19:45.481 TEST_HEADER include/spdk/uuid.h 00:19:45.481 TEST_HEADER include/spdk/version.h 00:19:45.481 TEST_HEADER include/spdk/vfio_user_pci.h 00:19:45.481 TEST_HEADER include/spdk/vfio_user_spec.h 00:19:45.481 TEST_HEADER include/spdk/vhost.h 00:19:45.481 TEST_HEADER include/spdk/vmd.h 00:19:45.481 TEST_HEADER include/spdk/xor.h 00:19:45.481 TEST_HEADER include/spdk/zipf.h 00:19:45.481 CXX test/cpp_headers/accel.o 00:19:45.481 CC test/app/jsoncat/jsoncat.o 00:19:45.481 CC test/dma/test_dma/test_dma.o 00:19:45.481 CXX test/cpp_headers/accel_module.o 00:19:45.481 LINK abort 00:19:45.481 CC test/app/stub/stub.o 00:19:45.481 LINK jsoncat 00:19:45.739 CC app/iscsi_tgt/iscsi_tgt.o 00:19:45.739 CXX test/cpp_headers/assert.o 00:19:45.739 LINK stub 00:19:45.739 CXX test/cpp_headers/barrier.o 00:19:45.739 CC app/spdk_tgt/spdk_tgt.o 00:19:45.739 LINK vhost_fuzz 00:19:45.739 CC test/env/mem_callbacks/mem_callbacks.o 00:19:45.998 LINK iscsi_tgt 00:19:45.998 LINK test_dma 00:19:45.998 CC examples/nvmf/nvmf/nvmf.o 00:19:45.998 CXX test/cpp_headers/base64.o 00:19:45.998 LINK spdk_tgt 00:19:45.998 CC test/env/vtophys/vtophys.o 00:19:45.998 CC test/event/event_perf/event_perf.o 00:19:45.998 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:19:46.256 CXX test/cpp_headers/bdev.o 00:19:46.256 LINK vtophys 00:19:46.256 CC test/env/pci/pci_ut.o 00:19:46.256 CC 
test/env/memory/memory_ut.o 00:19:46.256 LINK env_dpdk_post_init 00:19:46.256 LINK nvmf 00:19:46.256 LINK event_perf 00:19:46.515 CC app/spdk_lspci/spdk_lspci.o 00:19:46.515 CXX test/cpp_headers/bdev_module.o 00:19:46.515 LINK mem_callbacks 00:19:46.515 CC app/spdk_nvme_perf/perf.o 00:19:46.515 LINK spdk_lspci 00:19:46.515 CC app/spdk_nvme_identify/identify.o 00:19:46.515 CC test/event/reactor/reactor.o 00:19:46.515 LINK iscsi_fuzz 00:19:46.515 CXX test/cpp_headers/bdev_zone.o 00:19:46.515 LINK pci_ut 00:19:46.774 CC examples/util/zipf/zipf.o 00:19:46.774 LINK reactor 00:19:46.774 CXX test/cpp_headers/bit_array.o 00:19:46.774 CC examples/thread/thread/thread_ex.o 00:19:46.774 LINK zipf 00:19:47.033 CC app/spdk_nvme_discover/discovery_aer.o 00:19:47.033 CXX test/cpp_headers/bit_pool.o 00:19:47.033 CC test/event/reactor_perf/reactor_perf.o 00:19:47.033 CC app/spdk_top/spdk_top.o 00:19:47.033 LINK thread 00:19:47.033 CC app/vhost/vhost.o 00:19:47.033 CXX test/cpp_headers/blob_bdev.o 00:19:47.033 LINK spdk_nvme_discover 00:19:47.033 LINK reactor_perf 00:19:47.291 LINK memory_ut 00:19:47.291 CC examples/idxd/perf/perf.o 00:19:47.291 LINK vhost 00:19:47.291 CXX test/cpp_headers/blobfs_bdev.o 00:19:47.291 LINK spdk_nvme_perf 00:19:47.291 LINK spdk_nvme_identify 00:19:47.291 CC test/event/app_repeat/app_repeat.o 00:19:47.550 CC test/event/scheduler/scheduler.o 00:19:47.550 CXX test/cpp_headers/blobfs.o 00:19:47.550 CXX test/cpp_headers/blob.o 00:19:47.550 CXX test/cpp_headers/conf.o 00:19:47.550 CXX test/cpp_headers/config.o 00:19:47.550 CXX test/cpp_headers/cpuset.o 00:19:47.550 LINK app_repeat 00:19:47.550 LINK idxd_perf 00:19:47.550 CC test/lvol/esnap/esnap.o 00:19:47.550 CXX test/cpp_headers/crc16.o 00:19:47.550 CXX test/cpp_headers/crc32.o 00:19:47.809 LINK scheduler 00:19:47.809 CC test/nvme/aer/aer.o 00:19:47.809 CC test/rpc_client/rpc_client_test.o 00:19:47.809 CXX test/cpp_headers/crc64.o 00:19:47.809 CC examples/interrupt_tgt/interrupt_tgt.o 00:19:48.067 CC test/thread/poller_perf/poller_perf.o 00:19:48.067 CC app/spdk_dd/spdk_dd.o 00:19:48.067 LINK spdk_top 00:19:48.067 CXX test/cpp_headers/dif.o 00:19:48.067 LINK rpc_client_test 00:19:48.067 CC test/nvme/reset/reset.o 00:19:48.067 LINK aer 00:19:48.067 LINK interrupt_tgt 00:19:48.067 LINK poller_perf 00:19:48.067 CXX test/cpp_headers/dma.o 00:19:48.327 CC test/nvme/sgl/sgl.o 00:19:48.327 LINK reset 00:19:48.327 CC test/nvme/e2edp/nvme_dp.o 00:19:48.327 CXX test/cpp_headers/endian.o 00:19:48.327 CXX test/cpp_headers/env_dpdk.o 00:19:48.327 CC app/fio/nvme/fio_plugin.o 00:19:48.327 LINK spdk_dd 00:19:48.614 CC app/fio/bdev/fio_plugin.o 00:19:48.614 CXX test/cpp_headers/env.o 00:19:48.614 CXX test/cpp_headers/event.o 00:19:48.614 LINK sgl 00:19:48.614 CC test/nvme/overhead/overhead.o 00:19:48.614 LINK nvme_dp 00:19:48.614 CC test/nvme/err_injection/err_injection.o 00:19:48.614 CXX test/cpp_headers/fd_group.o 00:19:48.874 CC test/nvme/startup/startup.o 00:19:48.874 CC test/nvme/reserve/reserve.o 00:19:48.874 LINK err_injection 00:19:48.874 CXX test/cpp_headers/fd.o 00:19:48.874 CC test/nvme/simple_copy/simple_copy.o 00:19:48.874 LINK overhead 00:19:48.874 LINK spdk_nvme 00:19:48.874 LINK startup 00:19:48.874 LINK spdk_bdev 00:19:49.133 CXX test/cpp_headers/file.o 00:19:49.133 LINK reserve 00:19:49.133 CC test/nvme/connect_stress/connect_stress.o 00:19:49.133 CXX test/cpp_headers/ftl.o 00:19:49.133 LINK simple_copy 00:19:49.133 CC test/nvme/boot_partition/boot_partition.o 00:19:49.391 CC test/nvme/compliance/nvme_compliance.o 00:19:49.391 
CC test/nvme/fused_ordering/fused_ordering.o 00:19:49.391 CC test/nvme/doorbell_aers/doorbell_aers.o 00:19:49.391 CXX test/cpp_headers/gpt_spec.o 00:19:49.391 LINK connect_stress 00:19:49.391 CC test/nvme/fdp/fdp.o 00:19:49.391 LINK boot_partition 00:19:49.391 CC test/nvme/cuse/cuse.o 00:19:49.661 LINK fused_ordering 00:19:49.661 CXX test/cpp_headers/hexlify.o 00:19:49.661 CXX test/cpp_headers/histogram_data.o 00:19:49.661 CXX test/cpp_headers/idxd.o 00:19:49.661 LINK doorbell_aers 00:19:49.661 LINK nvme_compliance 00:19:49.661 CXX test/cpp_headers/idxd_spec.o 00:19:49.661 CXX test/cpp_headers/init.o 00:19:49.661 CXX test/cpp_headers/ioat.o 00:19:49.661 LINK fdp 00:19:49.661 CXX test/cpp_headers/ioat_spec.o 00:19:49.661 CXX test/cpp_headers/iscsi_spec.o 00:19:49.919 CXX test/cpp_headers/json.o 00:19:49.919 CXX test/cpp_headers/jsonrpc.o 00:19:49.919 CXX test/cpp_headers/keyring.o 00:19:49.919 CXX test/cpp_headers/keyring_module.o 00:19:49.919 CXX test/cpp_headers/likely.o 00:19:49.919 CXX test/cpp_headers/log.o 00:19:49.919 CXX test/cpp_headers/lvol.o 00:19:49.919 CXX test/cpp_headers/memory.o 00:19:50.178 CXX test/cpp_headers/mmio.o 00:19:50.178 CXX test/cpp_headers/nbd.o 00:19:50.178 CXX test/cpp_headers/notify.o 00:19:50.178 CXX test/cpp_headers/nvme.o 00:19:50.178 CXX test/cpp_headers/nvme_intel.o 00:19:50.178 CXX test/cpp_headers/nvme_ocssd.o 00:19:50.178 CXX test/cpp_headers/nvme_ocssd_spec.o 00:19:50.178 CXX test/cpp_headers/nvme_spec.o 00:19:50.178 CXX test/cpp_headers/nvme_zns.o 00:19:50.178 CXX test/cpp_headers/nvmf_cmd.o 00:19:50.178 CXX test/cpp_headers/nvmf_fc_spec.o 00:19:50.436 CXX test/cpp_headers/nvmf.o 00:19:50.436 CXX test/cpp_headers/nvmf_spec.o 00:19:50.436 CXX test/cpp_headers/nvmf_transport.o 00:19:50.436 CXX test/cpp_headers/opal.o 00:19:50.436 CXX test/cpp_headers/opal_spec.o 00:19:50.436 CXX test/cpp_headers/pci_ids.o 00:19:50.695 CXX test/cpp_headers/pipe.o 00:19:50.695 CXX test/cpp_headers/queue.o 00:19:50.695 LINK cuse 00:19:50.695 CXX test/cpp_headers/reduce.o 00:19:50.695 CXX test/cpp_headers/rpc.o 00:19:50.695 CXX test/cpp_headers/scheduler.o 00:19:50.695 CXX test/cpp_headers/scsi.o 00:19:50.695 CXX test/cpp_headers/scsi_spec.o 00:19:50.695 CXX test/cpp_headers/sock.o 00:19:50.954 CXX test/cpp_headers/stdinc.o 00:19:50.954 CXX test/cpp_headers/string.o 00:19:50.954 CXX test/cpp_headers/thread.o 00:19:50.954 CXX test/cpp_headers/trace.o 00:19:50.954 CXX test/cpp_headers/trace_parser.o 00:19:50.954 CXX test/cpp_headers/tree.o 00:19:50.954 CXX test/cpp_headers/ublk.o 00:19:50.954 CXX test/cpp_headers/util.o 00:19:50.954 CXX test/cpp_headers/uuid.o 00:19:50.954 CXX test/cpp_headers/version.o 00:19:51.216 CXX test/cpp_headers/vfio_user_pci.o 00:19:51.216 CXX test/cpp_headers/vfio_user_spec.o 00:19:51.216 CXX test/cpp_headers/vhost.o 00:19:51.216 CXX test/cpp_headers/vmd.o 00:19:51.216 CXX test/cpp_headers/xor.o 00:19:51.216 CXX test/cpp_headers/zipf.o 00:19:52.625 LINK esnap 00:19:55.179 ************************************ 00:19:55.179 END TEST make 00:19:55.179 ************************************ 00:19:55.179 00:19:55.179 real 1m7.076s 00:19:55.179 user 6m9.629s 00:19:55.179 sys 1m22.638s 00:19:55.179 00:45:58 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:19:55.179 00:45:58 make -- common/autotest_common.sh@10 -- $ set +x 00:19:55.179 00:45:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:19:55.179 00:45:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:19:55.179 00:45:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 
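The xtrace that resumes below is autobuild's resource-monitor teardown: stop_monitor_resources walks the monitors and TERMs each one through a pidfile kept under ../output/power. An illustrative sketch of that pidfile pattern, assuming the two pidfile names that appear in the trace; this is a simplified stand-in, not the actual pm/common implementation:

# Sketch only: signal monitors that left a pidfile behind, mirroring the trace below.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power

for pidfile in collect-cpu-load.pid collect-vmstat.pid; do
    path="$power_dir/$pidfile"
    # Skip monitors that never started or already cleaned up their pidfile.
    [[ -e $path ]] || continue
    pid=$(<"$path")
    kill -TERM "$pid" 2>/dev/null || true
done
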
00:19:55.179 00:45:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:55.179 00:45:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:19:55.179 00:45:58 -- pm/common@44 -- $ pid=5871 00:19:55.179 00:45:58 -- pm/common@50 -- $ kill -TERM 5871 00:19:55.179 00:45:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:55.179 00:45:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:19:55.179 00:45:58 -- pm/common@44 -- $ pid=5872 00:19:55.179 00:45:58 -- pm/common@50 -- $ kill -TERM 5872 00:19:55.179 00:45:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:55.179 00:45:58 -- nvmf/common.sh@7 -- # uname -s 00:19:55.179 00:45:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.179 00:45:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.179 00:45:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.179 00:45:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.180 00:45:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.180 00:45:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.180 00:45:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.180 00:45:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.180 00:45:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.180 00:45:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.180 00:45:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:19:55.180 00:45:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:19:55.180 00:45:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.180 00:45:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.180 00:45:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:55.180 00:45:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.180 00:45:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:55.180 00:45:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.180 00:45:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.180 00:45:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.180 00:45:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.180 00:45:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.180 00:45:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.180 00:45:58 -- paths/export.sh@5 -- # export PATH 00:19:55.180 00:45:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.180 00:45:58 -- nvmf/common.sh@47 -- # : 0 00:19:55.180 00:45:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.180 00:45:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.180 00:45:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.180 00:45:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.180 00:45:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.180 00:45:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.180 00:45:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.180 00:45:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.180 00:45:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:19:55.180 00:45:58 -- spdk/autotest.sh@32 -- # uname -s 00:19:55.180 00:45:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:19:55.180 00:45:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:19:55.180 00:45:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:19:55.180 00:45:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:19:55.180 00:45:58 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:19:55.180 00:45:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:19:55.180 00:45:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:19:55.180 00:45:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:19:55.180 00:45:58 -- spdk/autotest.sh@48 -- # udevadm_pid=67090 00:19:55.180 00:45:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:19:55.180 00:45:58 -- pm/common@17 -- # local monitor 00:19:55.180 00:45:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:19:55.180 00:45:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:19:55.180 00:45:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:19:55.180 00:45:58 -- pm/common@25 -- # sleep 1 00:19:55.180 00:45:58 -- pm/common@21 -- # date +%s 00:19:55.180 00:45:58 -- pm/common@21 -- # date +%s 00:19:55.180 00:45:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715733958 00:19:55.180 00:45:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715733958 00:19:55.180 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715733958_collect-vmstat.pm.log 00:19:55.439 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715733958_collect-cpu-load.pm.log 00:19:56.376 00:45:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:19:56.376 00:45:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:19:56.376 00:45:59 -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:56.376 00:45:59 -- common/autotest_common.sh@10 -- # set +x 00:19:56.376 00:45:59 -- spdk/autotest.sh@59 -- # create_test_list 00:19:56.376 00:45:59 -- common/autotest_common.sh@745 -- # xtrace_disable 00:19:56.376 00:45:59 -- common/autotest_common.sh@10 -- # set +x 00:19:56.376 00:45:59 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:19:56.376 00:45:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:19:56.376 00:45:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:19:56.376 00:45:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:19:56.376 00:45:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:19:56.376 00:45:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:19:56.376 00:45:59 -- common/autotest_common.sh@1452 -- # uname 00:19:56.376 00:45:59 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:19:56.376 00:45:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:19:56.376 00:45:59 -- common/autotest_common.sh@1472 -- # uname 00:19:56.376 00:45:59 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:19:56.376 00:45:59 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:19:56.376 00:45:59 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:19:56.376 00:45:59 -- spdk/autotest.sh@72 -- # hash lcov 00:19:56.376 00:45:59 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:19:56.376 00:45:59 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:19:56.376 --rc lcov_branch_coverage=1 00:19:56.376 --rc lcov_function_coverage=1 00:19:56.376 --rc genhtml_branch_coverage=1 00:19:56.376 --rc genhtml_function_coverage=1 00:19:56.376 --rc genhtml_legend=1 00:19:56.376 --rc geninfo_all_blocks=1 00:19:56.376 ' 00:19:56.376 00:45:59 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:19:56.376 --rc lcov_branch_coverage=1 00:19:56.376 --rc lcov_function_coverage=1 00:19:56.376 --rc genhtml_branch_coverage=1 00:19:56.376 --rc genhtml_function_coverage=1 00:19:56.376 --rc genhtml_legend=1 00:19:56.376 --rc geninfo_all_blocks=1 00:19:56.376 ' 00:19:56.376 00:45:59 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:19:56.376 --rc lcov_branch_coverage=1 00:19:56.376 --rc lcov_function_coverage=1 00:19:56.376 --rc genhtml_branch_coverage=1 00:19:56.376 --rc genhtml_function_coverage=1 00:19:56.376 --rc genhtml_legend=1 00:19:56.376 --rc geninfo_all_blocks=1 00:19:56.376 --no-external' 00:19:56.376 00:45:59 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:19:56.376 --rc lcov_branch_coverage=1 00:19:56.376 --rc lcov_function_coverage=1 00:19:56.376 --rc genhtml_branch_coverage=1 00:19:56.376 --rc genhtml_function_coverage=1 00:19:56.376 --rc genhtml_legend=1 00:19:56.376 --rc geninfo_all_blocks=1 00:19:56.376 --no-external' 00:19:56.376 00:45:59 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:19:56.376 lcov: LCOV version 1.14 00:19:56.376 00:45:59 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:20:06.445 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:20:06.445 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:20:06.445 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:20:06.445 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:20:06.445 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:20:06.445 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:20:11.716 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:20:11.716 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:20:26.647 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:20:26.647 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:20:26.647 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:20:26.648 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 
00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:20:26.648 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:20:26.648 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:20:26.648 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:20:28.553 00:46:31 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:20:28.553 00:46:31 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:28.553 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:20:28.553 00:46:31 -- spdk/autotest.sh@91 -- # rm -f 00:20:28.553 00:46:31 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:29.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:29.380 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:29.380 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:29.380 00:46:32 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:20:29.380 00:46:32 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:20:29.380 00:46:32 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:20:29.380 00:46:32 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:20:29.380 00:46:32 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:29.380 00:46:32 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:20:29.380 00:46:32 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:20:29.380 00:46:32 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:29.380 00:46:32 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:29.380 00:46:32 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:29.380 00:46:32 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:20:29.380 00:46:32 -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:20:29.380 00:46:32 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:29.380 00:46:32 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:29.380 00:46:32 -- 
common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:29.380 00:46:32 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n2 00:20:29.380 00:46:32 -- common/autotest_common.sh@1659 -- # local device=nvme1n2 00:20:29.380 00:46:32 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:20:29.380 00:46:32 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:29.380 00:46:32 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:29.380 00:46:32 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n3 00:20:29.380 00:46:32 -- common/autotest_common.sh@1659 -- # local device=nvme1n3 00:20:29.380 00:46:32 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:20:29.380 00:46:32 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:29.380 00:46:32 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:20:29.380 00:46:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:29.380 00:46:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:20:29.380 00:46:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:20:29.380 00:46:32 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:20:29.380 00:46:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:20:29.380 No valid GPT data, bailing 00:20:29.380 00:46:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:29.380 00:46:32 -- scripts/common.sh@391 -- # pt= 00:20:29.380 00:46:32 -- scripts/common.sh@392 -- # return 1 00:20:29.380 00:46:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:20:29.380 1+0 records in 00:20:29.380 1+0 records out 00:20:29.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560955 s, 187 MB/s 00:20:29.380 00:46:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:29.380 00:46:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:20:29.380 00:46:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:20:29.380 00:46:32 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:20:29.380 00:46:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:20:29.380 No valid GPT data, bailing 00:20:29.380 00:46:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:29.380 00:46:32 -- scripts/common.sh@391 -- # pt= 00:20:29.380 00:46:32 -- scripts/common.sh@392 -- # return 1 00:20:29.380 00:46:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:20:29.639 1+0 records in 00:20:29.639 1+0 records out 00:20:29.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00535812 s, 196 MB/s 00:20:29.639 00:46:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:29.639 00:46:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:20:29.639 00:46:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:20:29.639 00:46:32 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:20:29.639 00:46:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:20:29.639 No valid GPT data, bailing 00:20:29.639 00:46:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:20:29.639 00:46:32 -- scripts/common.sh@391 -- # pt= 00:20:29.639 00:46:32 -- scripts/common.sh@392 -- # return 1 00:20:29.639 00:46:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:20:29.639 1+0 records in 00:20:29.639 1+0 records out 00:20:29.639 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.00504094 s, 208 MB/s 00:20:29.639 00:46:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:29.639 00:46:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:20:29.639 00:46:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:20:29.639 00:46:32 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:20:29.639 00:46:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:20:29.639 No valid GPT data, bailing 00:20:29.639 00:46:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:20:29.639 00:46:32 -- scripts/common.sh@391 -- # pt= 00:20:29.639 00:46:32 -- scripts/common.sh@392 -- # return 1 00:20:29.639 00:46:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:20:29.639 1+0 records in 00:20:29.639 1+0 records out 00:20:29.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00561141 s, 187 MB/s 00:20:29.639 00:46:32 -- spdk/autotest.sh@118 -- # sync 00:20:29.639 00:46:32 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:20:29.639 00:46:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:20:29.639 00:46:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:20:32.175 00:46:34 -- spdk/autotest.sh@124 -- # uname -s 00:20:32.175 00:46:34 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:20:32.175 00:46:34 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:20:32.175 00:46:34 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:32.175 00:46:34 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:32.175 00:46:34 -- common/autotest_common.sh@10 -- # set +x 00:20:32.175 ************************************ 00:20:32.175 START TEST setup.sh 00:20:32.175 ************************************ 00:20:32.175 00:46:34 setup.sh -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:20:32.175 * Looking for test storage... 00:20:32.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:20:32.175 00:46:34 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:20:32.175 00:46:34 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:20:32.175 00:46:34 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:20:32.175 00:46:34 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:32.175 00:46:34 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:32.175 00:46:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:20:32.175 ************************************ 00:20:32.175 START TEST acl 00:20:32.175 ************************************ 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:20:32.175 * Looking for test storage... 
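Before the setup/acl run gets going, note the pattern used just above to claim each /dev/nvme*n* namespace for testing: probe the block device for SPDK GPT data with scripts/spdk-gpt.py, fall back to a blkid partition-table query, and only if both come back empty ("No valid GPT data, bailing") zero the first MiB with dd. The sketch below reuses the same probes that appear in the trace, but the control flow and the claim_disk helper are illustrative assumptions, not the exact autotest.sh logic; /dev/nvme0n1 is just an example device.

# Sketch of the disk-claiming check traced above (assumed helper, not SPDK's code).
claim_disk() {
    local dev=$1 pt
    if /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev"; then
        return 1                             # SPDK GPT data present: leave the disk alone
    fi
    pt=$(blkid -s PTTYPE -o value "$dev")
    [[ -z $pt ]] || return 1                 # some other partition table exists
    # Nothing recognizable on the disk: wipe the first MiB so later tests start clean.
    dd if=/dev/zero of="$dev" bs=1M count=1
}
claim_disk /dev/nvme0n1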
00:20:32.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:20:32.175 00:46:35 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n2 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme1n2 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n3 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme1n3 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:20:32.175 00:46:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:32.175 00:46:35 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:20:32.175 00:46:35 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:20:32.175 00:46:35 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:20:32.175 00:46:35 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:20:32.175 00:46:35 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:20:32.175 00:46:35 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:32.175 00:46:35 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:32.743 00:46:35 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:20:32.743 00:46:35 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:20:32.743 00:46:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:32.743 00:46:35 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:20:32.743 00:46:35 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:20:32.743 00:46:35 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:20:33.366 00:46:36 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:33.366 Hugepages 00:20:33.366 node hugesize free / total 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:33.366 00:20:33.366 Type BDF Vendor Device NUMA Driver Device Block devices 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:33.366 00:46:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:33.625 00:46:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:20:33.626 00:46:36 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:20:33.626 00:46:36 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:33.626 00:46:36 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:33.626 00:46:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:20:33.626 ************************************ 00:20:33.626 START TEST denied 00:20:33.626 ************************************ 00:20:33.626 00:46:36 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:20:33.626 00:46:36 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:20:33.626 00:46:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:20:33.626 00:46:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:20:33.626 00:46:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:20:33.626 00:46:36 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:20:34.563 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:20:34.563 00:46:37 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:20:34.563 00:46:37 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:20:34.563 00:46:37 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:20:34.563 00:46:37 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:20:34.563 00:46:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:20:34.563 00:46:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:20:34.563 00:46:37 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:20:34.563 00:46:37 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:20:34.563 00:46:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:34.563 00:46:37 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:35.131 00:20:35.131 real 0m1.481s 00:20:35.131 user 0m0.604s 00:20:35.131 sys 0m0.829s 00:20:35.131 00:46:38 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:35.131 00:46:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:20:35.131 ************************************ 00:20:35.131 END TEST denied 00:20:35.131 ************************************ 00:20:35.131 00:46:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:20:35.131 00:46:38 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:35.131 00:46:38 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:35.131 00:46:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:20:35.131 ************************************ 00:20:35.131 START TEST allowed 00:20:35.131 ************************************ 00:20:35.131 00:46:38 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:20:35.131 00:46:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:20:35.131 00:46:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:20:35.131 00:46:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:20:35.131 00:46:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:20:35.131 00:46:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:20:36.067 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:36.067 00:46:39 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:20:36.067 00:46:39 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:20:36.067 00:46:39 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:20:36.067 00:46:39 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:20:36.067 00:46:39 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:20:36.067 00:46:39 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:20:36.067 00:46:39 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:20:36.067 00:46:39 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:20:36.067 00:46:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:36.067 00:46:39 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:36.636 00:20:36.636 real 0m1.580s 00:20:36.636 user 0m0.683s 00:20:36.636 sys 0m0.899s 00:20:36.636 00:46:39 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:36.636 00:46:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:20:36.636 ************************************ 00:20:36.636 END TEST allowed 00:20:36.636 ************************************ 00:20:36.912 00:20:36.912 real 0m4.921s 00:20:36.912 user 0m2.151s 00:20:36.912 sys 0m2.728s 00:20:36.912 00:46:39 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:36.912 00:46:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:20:36.912 ************************************ 00:20:36.912 END TEST acl 00:20:36.912 ************************************ 00:20:36.912 00:46:39 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:20:36.912 00:46:39 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:36.912 00:46:39 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:36.912 00:46:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:20:36.912 ************************************ 00:20:36.912 START TEST hugepages 00:20:36.912 ************************************ 00:20:36.912 00:46:39 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:20:36.912 * Looking for test storage... 00:20:36.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.912 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4030036 kB' 'MemAvailable: 7401252 kB' 'Buffers: 2436 kB' 'Cached: 3570624 kB' 'SwapCached: 0 kB' 'Active: 873656 kB' 'Inactive: 2803308 kB' 'Active(anon): 114396 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 105732 kB' 'Mapped: 48748 kB' 'Shmem: 10492 kB' 'KReclaimable: 91620 kB' 'Slab: 172792 kB' 'SReclaimable: 91620 kB' 'SUnreclaim: 81172 kB' 'KernelStack: 6736 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 335272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.913 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.913 00:46:40 
setup.sh.hugepages -- setup/common.sh@32 -- # continue [xtrace condensed: setup/common.sh@31-@32 repeat IFS=': ', read -r var val _, and continue for every remaining /proc/meminfo key from Inactive(anon) through HugePages_Total; none of them matches Hugepagesize] 00:20:36.914
00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
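The loop traced above is the get_meminfo pattern: walk /proc/meminfo with IFS=': ' until the requested key matches, echo its value and return, after which clear_hp writes 0 into every per-node nr_hugepages file. A minimal stand-alone bash sketch of that pattern follows; it is an illustrative reconstruction built from the paths visible in the trace, not the verbatim setup/common.sh and setup/hugepages.sh code, and the writes require root.

#!/usr/bin/env bash
shopt -s extglob  # the node+([0-9]) glob below needs extglob, as in the traced script

# Sketch of the get_meminfo pattern: scan /proc/meminfo for one key and print its value.
# (The real helper can also read /sys/devices/system/node/nodeN/meminfo and strips the
# "Node N " prefix from those lines; that part is omitted here.)
get_meminfo_sketch() {  # usage: get_meminfo_sketch Hugepagesize
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done </proc/meminfo
	return 1
}

# Sketch of the clear_hp pattern: reset every per-node hugepage pool to 0 (needs root).
clear_hp_sketch() {
	local node hp
	for node in /sys/devices/system/node/node+([0-9]); do
		for hp in "$node/hugepages/hugepages-"*; do
			echo 0 >"$hp/nr_hugepages"
		done
	done
}

default_hugepages=$(get_meminfo_sketch Hugepagesize)  # prints 2048 on this runner
echo "default hugepage size: ${default_hugepages} kB"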
00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:20:36.914 00:46:40 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:20:36.914 00:46:40 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:36.914 00:46:40 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:36.914 00:46:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:20:36.914 ************************************ 00:20:36.914 START TEST default_setup 00:20:36.914 ************************************ 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:20:36.914 00:46:40 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:37.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.856 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.856 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6116152 kB' 'MemAvailable: 9487240 kB' 'Buffers: 2436 kB' 'Cached: 3570616 kB' 'SwapCached: 0 kB' 'Active: 890520 kB' 'Inactive: 2803312 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803312 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122412 kB' 'Mapped: 48880 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172484 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81124 kB' 'KernelStack: 6640 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.856 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
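The meminfo snapshot just captured ties back to the earlier get_test_nr_hugepages 2097152 0 call: the trace does not show the arithmetic itself, but 2097152 / 2048 = 1024, which matches both the nr_hugepages=1024 the helper settled on and the HugePages_Total: 1024 / Hugetlb: 2097152 kB reported here. A quick stand-alone spot-check of the same numbers (illustrative shell only, not part of the SPDK test scripts):

size_kb=2097152                                                     # requested test size, taken from the trace
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this runner
expected_pages=$((size_kb / hugepagesize_kb))                       # -> 1024
grep -E '^(HugePages_Total|HugePages_Free|Hugetlb):' /proc/meminfo
echo "expected nr_hugepages: ${expected_pages}"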
[xtrace condensed: setup/common.sh@31-@32 skip every /proc/meminfo key from MemFree through VmallocTotal with continue while get_meminfo scans for AnonHugePages] 00:20:37.857 00:46:41
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.857 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.857 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:37.857 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.857 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.857 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6115904 kB' 'MemAvailable: 9486992 kB' 'Buffers: 2436 kB' 'Cached: 
3570616 kB' 'SwapCached: 0 kB' 'Active: 890536 kB' 'Inactive: 2803312 kB' 'Active(anon): 131276 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803312 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122380 kB' 'Mapped: 48880 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172484 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81124 kB' 'KernelStack: 6592 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:37.858 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.858 00:46:41 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' [xtrace condensed: setup/common.sh@31-@32 skip every /proc/meminfo key from Active through Unaccepted with continue while get_meminfo scans for HugePages_Surp] 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 --
# continue 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:20:37.859 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6115904 kB' 'MemAvailable: 9487004 kB' 'Buffers: 2436 kB' 'Cached: 3570616 kB' 'SwapCached: 0 kB' 'Active: 889956 kB' 'Inactive: 2803324 kB' 'Active(anon): 130696 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121824 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172484 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81124 kB' 'KernelStack: 6624 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 
00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.860 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 
00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.861 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:37.861 
00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:20:37.862 nr_hugepages=1024 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:20:37.862 resv_hugepages=0 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:20:37.862 surplus_hugepages=0 00:20:37.862 anon_hugepages=0 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6115652 kB' 'MemAvailable: 9486752 kB' 'Buffers: 2436 kB' 'Cached: 3570616 kB' 'SwapCached: 0 kB' 'Active: 890152 kB' 'Inactive: 2803324 kB' 'Active(anon): 130892 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122024 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172484 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81124 kB' 'KernelStack: 6608 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 
00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:37.862 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
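The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" followed by "continue" above and below come from the get_meminfo helper in setup/common.sh that the trace names: it snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node argument is given), strips the "Node <N> " prefix, and walks every "key: value" pair until it can echo the value for the requested key; setup/hugepages.sh then feeds those values into its surplus/reserved accounting. The following is a minimal standalone sketch of that idea, inferred from the trace rather than copied from the SPDK source; the function name get_meminfo_sketch and the hard-coded 1024 are illustrative assumptions.

#!/usr/bin/env bash
# Sketch only: a stripped-down re-creation of the behaviour traced in this log.
# get_meminfo_sketch and the literal 1024 below are illustrative, not SPDK code.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo

    # A per-node query reads the sysfs copy instead of the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local mem line var val _ prefix="Node $node "
    mapfile -t mem <"$mem_f"
    # Per-node lines look like "Node 0 HugePages_Surp: 0"; drop that prefix.
    mem=("${mem[@]#"$prefix"}")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        # This comparison is what produces the long runs of "continue" in the trace.
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}

# Shape of the accounting check the log performs with those values:
nr_hugepages=1024   # the value echoed as nr_hugepages=1024 in the trace
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"

Run against the meminfo snapshot printed in this trace, such a sketch would report surp=0, resv=0 and total=1024, which is why the (( 1024 == nr_hugepages + surp + resv )) test at setup/hugepages.sh@107 passes before the per-node HugePages_Surp query that follows.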
00:20:38.149 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6115652 kB' 'MemUsed: 6126320 kB' 'SwapCached: 0 kB' 'Active: 889956 kB' 'Inactive: 2803324 kB' 'Active(anon): 130696 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 3573052 kB' 'Mapped: 48792 kB' 'AnonPages: 121840 kB' 'Shmem: 10468 kB' 'KernelStack: 6640 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91360 kB' 'Slab: 172484 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 
00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.150 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:20:38.151 node0=1024 expecting 1024 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:20:38.151 00:20:38.151 real 0m1.074s 00:20:38.151 user 0m0.504s 00:20:38.151 sys 0m0.516s 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:38.151 00:46:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:20:38.151 ************************************ 00:20:38.151 END TEST default_setup 00:20:38.151 ************************************ 00:20:38.151 00:46:41 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:20:38.151 00:46:41 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:38.151 00:46:41 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:38.151 00:46:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:20:38.151 ************************************ 00:20:38.151 START TEST per_node_1G_alloc 00:20:38.151 ************************************ 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:20:38.151 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:38.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.412 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:38.412 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7181188 kB' 'MemAvailable: 10552288 kB' 'Buffers: 2436 kB' 'Cached: 3570616 kB' 'SwapCached: 0 kB' 'Active: 890644 kB' 'Inactive: 2803324 kB' 'Active(anon): 131384 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 
'Inactive(file): 2803324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122504 kB' 'Mapped: 49112 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172532 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81172 kB' 'KernelStack: 6648 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
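The long runs of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" records above and below are the xtrace of setup/common.sh's get_meminfo scanning every "field: value" pair of the meminfo snapshot just printed until the requested counter matches. A minimal standalone sketch of that scan pattern follows; the helper name get_meminfo_value and its defaults are illustrative, and this is a simplified reconstruction of what the trace shows, not the SPDK helper itself:

    #!/usr/bin/env bash
    # get_meminfo_value <field> [node]
    # Print the value of a /proc/meminfo field, or of the per-node
    # /sys/devices/system/node/node<N>/meminfo file when a node id is given.
    # Simplified sketch of the pattern visible in this trace.
    get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        shopt -s extglob
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it so both
        # formats parse the same way.
        mem=("${mem[@]#Node +([0-9]) }")

        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Example: the counter this pass of the trace is looking for.
    get_meminfo_value AnonHugePages

Each "continue" record in the trace corresponds to one non-matching field in that loop, and the later "echo 0 / return 0" records mark the field being found.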
00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.412 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
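A few records back (setup/hugepages.sh@146 and common.sh@10) the test exported NRHUGE=512 and HUGENODE=0 and ran /home/vagrant/spdk_repo/spdk/scripts/setup.sh to reserve the pages before this verification pass. The setup.sh internals are not part of this trace; as a sketch of what that request amounts to at the kernel interface, assuming the 2048 kB default huge page size reported in the meminfo snapshot above, the per-node reservation can be expressed directly against sysfs:

    # Reserve 512 x 2 MiB huge pages on NUMA node 0, matching NRHUGE=512
    # HUGENODE=0 from the trace. Needs root; the kernel may grant fewer
    # pages than requested under memory pressure.
    NRHUGE=512
    HUGENODE=0
    HPG_SZ_KB=2048   # Hugepagesize reported in the meminfo snapshot

    nr_path=/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-${HPG_SZ_KB}kB/nr_hugepages
    echo "$NRHUGE" > "$nr_path"

    # Report what was actually granted, in the same style the test logs.
    echo "node${HUGENODE}=$(cat "$nr_path") expecting $NRHUGE"

This shows the equivalent sysfs write only; it is not a claim about how scripts/setup.sh implements the reservation.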
00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:20:38.413 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7181772 kB' 'MemAvailable: 10552872 kB' 'Buffers: 2436 kB' 'Cached: 3570616 kB' 'SwapCached: 0 kB' 'Active: 890248 kB' 'Inactive: 2803324 kB' 'Active(anon): 130988 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122068 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172532 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81172 kB' 'KernelStack: 6660 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.679 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.680 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
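The same scan repeats here for HugePages_Surp (and next for HugePages_Rsvd), feeding the surp, resv, and anon locals that hugepages.sh@92 through @94 declared earlier. Condensed into one place, and reusing the get_meminfo_value sketch above, the verification flow the trace walks through looks roughly like this; the function and variable names are illustrative, and the real logic is verify_nr_hugepages in setup/hugepages.sh:

    # Gather the global counters, then check each node's huge page total
    # against the count the test expects.
    verify_hugepages_sketch() {
        local expected_per_node=$1; shift
        local anon surp resv node got

        anon=$(get_meminfo_value AnonHugePages)
        surp=$(get_meminfo_value HugePages_Surp)
        resv=$(get_meminfo_value HugePages_Rsvd)
        echo "anon=$anon surp=$surp resv=$resv"

        for node in "$@"; do
            got=$(get_meminfo_value HugePages_Total "$node")
            echo "node$node=$got expecting $expected_per_node"
            [[ $got == "$expected_per_node" ]] || return 1
        done
    }

    # For the per_node_1G_alloc case traced here: 512 pages expected on node 0.
    verify_hugepages_sketch 512 0

The "node0=1024 expecting 1024" line in the default_setup output above is the same kind of per-node comparison, just with the default 1024-page reservation.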
00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
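One more pattern worth decoding: the hugepages.sh@96 record earlier in this pass expands to a check of the kernel's transparent huge page mode ("always [madvise] never" on this host) before AnonHugePages is sampled at @97. Read as a guard, and with the caveat that the "skip anon accounting when THP is fully off" interpretation is an inference from the trace rather than something the log states, it amounts to:

    # Sample anonymous huge page usage only when transparent huge pages are
    # not globally disabled ("[never]" selected). Inference from the trace,
    # not taken from SPDK documentation.
    thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(get_meminfo_value AnonHugePages)   # reuses the sketch above
    else
        anon=0
    fi
    echo "AnonHugePages counted: $anon kB"

On this run the mode is "always [madvise] never", so the guard passes and the trace goes on to scan for AnonHugePages, which comes back 0.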
00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7182068 kB' 'MemAvailable: 10553168 kB' 'Buffers: 2436 kB' 'Cached: 3570616 kB' 'SwapCached: 0 kB' 'Active: 890032 kB' 'Inactive: 2803324 kB' 'Active(anon): 130772 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121896 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172528 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81168 kB' 'KernelStack: 6660 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.681 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
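
(The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" records above and below is xtrace output from a field-by-field scan of /proc/meminfo, or of a per-node meminfo file under /sys/devices/system/node/. A minimal standalone sketch of that lookup is included here for orientation; the helper name get_meminfo_value and its exact shape are illustrative assumptions, not the verbatim setup/common.sh code.)

    shopt -s extglob                      # needed for the +([0-9]) prefix pattern below

    # Hypothetical helper: return the value of one meminfo field, system-wide or per node.
    get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem line var val _
        # Per-node meminfo lives under /sys and prefixes every line with "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the per-node prefix; harmless otherwise
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # matched: print value
        done
        return 1                          # field not present
    }

    # Examples (values as reported in this trace):
    #   get_meminfo_value HugePages_Rsvd      ->  0
    #   get_meminfo_value HugePages_Surp 0    ->  0    (NUMA node 0)
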
00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.682 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 
00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:20:38.683 nr_hugepages=512 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:20:38.683 resv_hugepages=0 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:20:38.683 surplus_hugepages=0 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:20:38.683 anon_hugepages=0 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7182068 kB' 'MemAvailable: 10553168 kB' 'Buffers: 2436 kB' 'Cached: 3570616 kB' 'SwapCached: 0 kB' 'Active: 890248 kB' 'Inactive: 2803324 kB' 'Active(anon): 130988 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122116 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172528 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81168 kB' 'KernelStack: 6644 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.683 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
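
(At this point the trace has already extracted surp=0 and resv=0, echoed nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and is re-scanning meminfo for HugePages_Total to confirm that the kernel's total equals the requested pages plus any surplus and reserved pages. A condensed, hypothetical form of that arithmetic check, using awk in place of the traced read loop, might look like the sketch below; it is illustrative only, not the verbatim setup/hugepages.sh verification.)

    verify_hugepage_accounting() {
        local requested=$1                 # pages requested via nr_hugepages, e.g. 512
        local meminfo=${2:-/proc/meminfo}
        local total surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' "$meminfo")
        surp=$(awk  '/^HugePages_Surp:/  {print $2}' "$meminfo")
        resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' "$meminfo")
        # Healthy pool: total == requested + surplus + reserved (all offsets are 0 above).
        if (( total == requested + surp + resv )); then
            echo "hugepages OK: total=$total requested=$requested surp=$surp resv=$resv"
        else
            echo "hugepages mismatch: total=$total vs $((requested + surp + resv))" >&2
            return 1
        fi
    }

    # e.g. verify_hugepage_accounting 512  ->  hugepages OK: total=512 requested=512 surp=0 resv=0
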
00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.684 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@33 -- # echo 512 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7181816 kB' 'MemUsed: 5060156 kB' 'SwapCached: 0 kB' 'Active: 890468 kB' 'Inactive: 2803324 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 3573052 kB' 'Mapped: 49184 kB' 'AnonPages: 122340 kB' 'Shmem: 10468 kB' 'KernelStack: 6660 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91360 kB' 'Slab: 172524 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.685 00:46:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.685 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.686 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:20:38.687 node0=512 expecting 512 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:20:38.687 00:20:38.687 real 0m0.553s 00:20:38.687 user 0m0.259s 00:20:38.687 sys 0m0.330s 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:38.687 00:46:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:20:38.687 ************************************ 00:20:38.687 END TEST per_node_1G_alloc 00:20:38.687 ************************************ 00:20:38.687 00:46:41 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:20:38.687 00:46:41 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:38.687 00:46:41 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:38.687 00:46:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:20:38.687 ************************************ 00:20:38.687 START TEST even_2G_alloc 00:20:38.687 ************************************ 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:20:38.687 00:46:41 
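
The long run of `read -r var val _` / `continue` lines above is the xtrace of a field lookup over /proc/meminfo: the helper scans every key until it hits the one it was asked for (here HugePages_Surp) and echoes its value. As a standalone illustration only (not the project's setup/common.sh itself; the function name below is made up for the sketch), the pattern visible in the trace boils down to:

```bash
#!/usr/bin/env bash
# Minimal sketch of the /proc/meminfo lookup pattern visible in the trace above.
# get_meminfo_value is a hypothetical name; the traced helper lives in setup/common.sh.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # The trace switches to the per-node meminfo when a node is given and its
    # sysfs file exists. Note: per-node lines carry a "Node <id> " prefix,
    # which the traced script strips before parsing; this sketch skips that.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

get_meminfo_value HugePages_Surp   # e.g. prints 0, matching the trace above
```
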
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:20:38.687 00:46:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:38.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.964 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:38.964 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- 
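
The even_2G_alloc test above requests 1024 x 2 MiB pages (NRHUGE=1024) with HUGE_EVEN_ALLOC=yes and then re-runs scripts/setup.sh. The actual allocation logic is inside scripts/setup.sh and is not shown in this excerpt; purely as a hypothetical sketch of what an even per-node split via the standard kernel sysfs knobs can look like (not the project's implementation):

```bash
#!/usr/bin/env bash
# Hypothetical sketch: spread NRHUGE 2 MiB hugepages evenly across online NUMA
# nodes using the standard per-node sysfs files. Requires root. This is NOT
# scripts/setup.sh; it only illustrates the kind of allocation the test requests.
NRHUGE=${NRHUGE:-1024}

mapfile -t nodes < <(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null)
(( ${#nodes[@]} > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

per_node=$(( NRHUGE / ${#nodes[@]} ))
for node in "${nodes[@]}"; do
    echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
done

grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect both to report NRHUGE
```
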
setup/common.sh@17 -- # local get=AnonHugePages 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6128096 kB' 'MemAvailable: 9499200 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 890476 kB' 'Inactive: 2803328 kB' 'Active(anon): 131216 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122328 kB' 'Mapped: 48916 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172516 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81156 kB' 'KernelStack: 6628 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.226 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.226 
00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.227 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 
00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6128432 kB' 'MemAvailable: 9499536 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 890356 kB' 'Inactive: 2803328 kB' 'Active(anon): 131096 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122204 kB' 'Mapped: 48800 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172516 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81156 kB' 'KernelStack: 6624 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13461012 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.228 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 
00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 
00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.229 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6128432 kB' 'MemAvailable: 9499536 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 890188 kB' 'Inactive: 2803328 kB' 'Active(anon): 130928 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122164 kB' 'Mapped: 49312 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172512 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81152 kB' 'KernelStack: 6672 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 
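
At this point the verification has settled anon=0 and surp=0 and is mid-way through a third full scan, this time for HugePages_Rsvd; the snapshot it is reading reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0, i.e. exactly the 1024 pages the test requested. As a variation on the traced one-scan-per-field approach, the same counters can be gathered in a single pass; the helper name and the final comparison below are assumptions for illustration, not the project's verify_nr_hugepages:

```bash
#!/usr/bin/env bash
# Sketch only: collect the hugepage counters consulted by the verification
# above in one pass over /proc/meminfo. check_hugepages is a hypothetical name.
check_hugepages() {
    local expected=$1 total free rsvd surp
    while IFS=': ' read -r key val _; do
        case $key in
            HugePages_Total) total=$val ;;
            HugePages_Free)  free=$val ;;
            HugePages_Rsvd)  rsvd=$val ;;
            HugePages_Surp)  surp=$val ;;
        esac
    done < /proc/meminfo
    echo "total=$total free=$free rsvd=$rsvd surp=$surp"
    (( total == expected ))   # exit status reflects whether the count matches
}

check_hugepages 1024 && echo "hugepage count matches the requested 1024"
```
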
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.230 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.231 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 
00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 
00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:20:39.232 nr_hugepages=1024 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:20:39.232 resv_hugepages=0 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:20:39.232 surplus_hugepages=0 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:20:39.232 anon_hugepages=0 00:20:39.232 00:46:42 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6128432 kB' 'MemAvailable: 9499532 kB' 'Buffers: 2436 kB' 'Cached: 3570616 kB' 'SwapCached: 0 kB' 'Active: 890120 kB' 'Inactive: 2803324 kB' 'Active(anon): 130860 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803324 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122080 kB' 'Mapped: 48780 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172508 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81148 kB' 'KernelStack: 6624 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.232 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.233 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6128432 kB' 'MemUsed: 6113540 kB' 'SwapCached: 0 kB' 'Active: 890048 kB' 'Inactive: 2803328 kB' 'Active(anon): 130788 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 3573056 kB' 'Mapped: 48792 kB' 'AnonPages: 121972 kB' 'Shmem: 10468 kB' 'KernelStack: 6640 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91360 kB' 'Slab: 172512 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.234 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 
00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:20:39.235 node0=1024 expecting 1024 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:20:39.235 00:46:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:20:39.235 00:20:39.235 real 0m0.581s 00:20:39.235 user 0m0.267s 00:20:39.235 sys 0m0.318s 00:20:39.236 00:46:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:39.236 00:46:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:20:39.236 
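The long runs of continue lines above are bash xtrace from a /proc/meminfo lookup: the helper traced in setup/common.sh reads the file (or /sys/devices/system/node/nodeN/meminfo when a node index is passed), strips any leading "Node N " prefix, and walks the key/value pairs with IFS=': ' until the requested field (HugePages_Rsvd, HugePages_Total, HugePages_Surp) matches, then echoes its value. A minimal sketch of that lookup, reconstructed from the trace only — the function name and exact structure here are assumptions, not the verbatim setup/common.sh source:

shopt -s extglob                       # the +([0-9]) pattern below needs extglob

get_meminfo_sketch() {                 # hypothetical name; mirrors the traced flow
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # sysfs per-node lines carry a "Node N " prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # each non-matching key is one 'continue' line above
        echo "$val"
        return 0
    done
    return 1
}

# In this run: HugePages_Rsvd -> 0, HugePages_Total -> 1024, and the per-node
# HugePages_Surp on node0 -> 0, so 1024 == nr_hugepages + surp + resv holds and
# the test prints 'node0=1024 expecting 1024'.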
************************************ 00:20:39.236 END TEST even_2G_alloc 00:20:39.236 ************************************ 00:20:39.236 00:46:42 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:20:39.236 00:46:42 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:39.236 00:46:42 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:39.236 00:46:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:20:39.236 ************************************ 00:20:39.236 START TEST odd_alloc 00:20:39.236 ************************************ 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:20:39.236 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:39.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:39.805 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:39.805 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:20:39.805 00:46:42 
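For the odd_alloc case the trace requests get_test_nr_hugepages 2098176, i.e. HUGEMEM=2049 MiB, which is not an even multiple of the 2048 kB hugepage size, so the test ends up with nr_hugepages=1025 — an intentionally odd page count. A quick sanity check of those numbers, assuming the request is simply rounded up to whole pages (the rounding rule is an inference; only the inputs and the resulting counts come from the log):

hugemem_mb=2049                                   # HUGEMEM from the trace
page_kb=2048                                      # Hugepagesize reported above
size_kb=$(( hugemem_mb * 1024 ))                  # 2098176 kB requested
pages=$(( (size_kb + page_kb - 1) / page_kb ))    # round up to whole pages
echo "$size_kb kB -> $pages pages"                # 2098176 kB -> 1025 pages
echo "$(( pages * page_kb )) kB reserved"         # 2099200 kB, the Hugetlb figure in the next meminfo dump

With HUGE_EVEN_ALLOC=yes the 1025 pages appear to be spread evenly across the available NUMA nodes by scripts/setup.sh (here only node0 exists), after which verify_nr_hugepages re-reads the counters below.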
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6125836 kB' 'MemAvailable: 9496940 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 890472 kB' 'Inactive: 2803328 kB' 'Active(anon): 131212 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122684 kB' 'Mapped: 48960 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172544 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81184 kB' 'KernelStack: 6628 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.805 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6125836 kB' 'MemAvailable: 9496940 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 890060 kB' 'Inactive: 2803328 kB' 'Active(anon): 130800 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122224 kB' 'Mapped: 48952 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172540 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81180 kB' 'KernelStack: 6596 kB' 'PageTables: 4128 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.806 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:20:39.807 00:46:42 
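With surp set to 0 above, the same lookup is repeated for HugePages_Rsvd, and verify_nr_hugepages then checks the totals for consistency (the "(( 1025 == nr_hugepages + surp + resv ))" and "(( 1025 == nr_hugepages ))" tests that appear further down in this trace). A hedged sketch of that check, built on the lookup sketched after the even_2G_alloc block and using the values from this run (the wrapper function is an assumption, not the literal setup/hugepages.sh code):

  # Consistency check reconstructed from the odd_alloc trace: 1025 odd-sized
  # hugepages were requested, and the kernel should report no surplus or
  # reserved pages on top of that count.
  verify_nr_hugepages_sketch() {
      local nr_hugepages=1025 surp resv total
      surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
      resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
      total=$(get_meminfo_sketch HugePages_Total)   # 1025 in this run
      (( total == nr_hugepages + surp + resv )) || return 1
      (( total == nr_hugepages ))
  }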
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6125584 kB' 'MemAvailable: 9496688 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 889948 kB' 'Inactive: 2803328 kB' 'Active(anon): 130688 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122140 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172536 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81176 kB' 'KernelStack: 6640 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.807 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 
00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:20:39.808 nr_hugepages=1025 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:20:39.808 resv_hugepages=0 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:20:39.808 surplus_hugepages=0 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:20:39.808 anon_hugepages=0 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6125584 
kB' 'MemAvailable: 9496688 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 889984 kB' 'Inactive: 2803328 kB' 'Active(anon): 130724 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122140 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172536 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81176 kB' 'KernelStack: 6624 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.808 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 
00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
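The loop traced above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "key: value" line at a time with IFS=': ', skipping every field until the requested key (here HugePages_Total) matches and its value can be echoed back. A minimal standalone sketch of that parsing pattern follows; the function name lookup_meminfo and the simplified error handling are illustrative, not the exact SPDK helper, and per-node files and unit handling are deliberately left out.
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo-style lookup traced above; lookup_meminfo
# is an illustrative name, not the SPDK function.
lookup_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the key we were asked for turns up, which is
        # what the long run of "continue" entries in the trace is doing.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
lookup_meminfo HugePages_Total   # prints 1025 on the test VM captured above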
00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:20:39.809 
00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6125584 kB' 'MemUsed: 6116388 kB' 'SwapCached: 0 kB' 'Active: 889932 kB' 'Inactive: 2803328 kB' 'Active(anon): 130672 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 3573056 kB' 'Mapped: 48792 kB' 'AnonPages: 122116 kB' 'Shmem: 10468 kB' 'KernelStack: 6624 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91360 kB' 'Slab: 172536 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.809 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
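For the per-node pass the trace switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a leading "Node 0 " that the "${mem[@]#Node +([0-9]) }" expansion strips before the same key/value scan runs. A rough illustration of that prefix handling, assuming extglob is enabled and a single NUMA node 0 exists (the variable names are illustrative, not the script's):
#!/usr/bin/env bash
# Sketch of the per-node lookup seen in the trace: read the node-local
# meminfo file and drop the leading "Node <id> " before parsing.
shopt -s extglob                      # needed for the +([0-9]) pattern
node=0
mem_f=/sys/devices/system/node/node${node}/meminfo
mapfile -t mem < "$mem_f"             # one array element per line
mem=("${mem[@]#Node +([0-9]) }")      # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Surp ]] || continue
    echo "node${node} HugePages_Surp=$val"
    break
done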
00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:20:39.810 node0=1025 expecting 1025 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:20:39.810 
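The odd_alloc pass above reduces to the check hugepages.sh performs at @107/@110 in the trace: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages, and the per-node sum must reach the same 1025. A compact re-statement of that check, reading live values rather than the test VM's numbers (not the literal hugepages.sh code):
#!/usr/bin/env bash
# Re-statement of the odd_alloc verification: requested pages == kernel total,
# with surplus and reserved pages folded in, as hugepages.sh does above.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
if (( total == nr_hugepages + surp + resv )); then
    echo "OK: HugePages_Total=$total matches nr_hugepages=$nr_hugepages (surp=$surp resv=$resv)"
else
    echo "MISMATCH: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
    exit 1
fi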
00:20:39.810 real 0m0.549s 00:20:39.810 user 0m0.292s 00:20:39.810 sys 0m0.291s 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:39.810 00:46:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:20:39.810 ************************************ 00:20:39.810 END TEST odd_alloc 00:20:39.810 ************************************ 00:20:39.810 00:46:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:20:39.810 00:46:43 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:39.810 00:46:43 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:39.810 00:46:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:20:40.069 ************************************ 00:20:40.069 START TEST custom_alloc 00:20:40.069 ************************************ 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:20:40.069 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:40.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:40.333 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:40.333 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:20:40.333 00:46:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7175596 kB' 'MemAvailable: 10546700 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 890460 kB' 'Inactive: 2803328 kB' 'Active(anon): 131200 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122292 kB' 'Mapped: 48900 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172500 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81140 kB' 'KernelStack: 6596 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
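custom_alloc asks get_test_nr_hugepages for a 1048576 kB pool and, with the VM's 2048 kB huge page size, lands on the 512 pages the meminfo dump above reports (HugePages_Total: 512, Hugetlb: 1048576 kB). A small sketch of that size-to-page-count arithmetic, assuming the default huge page size is whatever /proc/meminfo's Hugepagesize line reports:
#!/usr/bin/env bash
# Size-to-page-count arithmetic behind the custom_alloc request:
# 1048576 kB / 2048 kB per page = 512 huge pages, matching HugePages_Total: 512.
size_kb=1048576                                            # requested pool size in kB
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "requesting $nr_hugepages pages of ${hugepagesize_kb} kB (Hugetlb would be $(( nr_hugepages * hugepagesize_kb )) kB)"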
00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.333 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.334 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7175596 kB' 'MemAvailable: 10546700 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 890056 kB' 'Inactive: 2803328 kB' 'Active(anon): 130796 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121900 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172500 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81140 kB' 'KernelStack: 6640 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.335 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7175596 kB' 'MemAvailable: 10546700 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 890316 kB' 'Inactive: 2803328 kB' 'Active(anon): 131056 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122160 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172500 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81140 kB' 'KernelStack: 6640 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.336 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.337 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:20:40.338 nr_hugepages=512 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:20:40.338 resv_hugepages=0 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:20:40.338 surplus_hugepages=0 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:20:40.338 anon_hugepages=0 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:20:40.338 
00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7175596 kB' 'MemAvailable: 10546700 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 889996 kB' 'Inactive: 2803328 kB' 'Active(anon): 130736 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122068 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172496 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81136 kB' 'KernelStack: 6624 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.338 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.598 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7175596 kB' 'MemUsed: 5066376 kB' 'SwapCached: 0 kB' 
'Active: 890236 kB' 'Inactive: 2803328 kB' 'Active(anon): 130976 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 3573056 kB' 'Mapped: 48792 kB' 'AnonPages: 122084 kB' 'Shmem: 10468 kB' 'KernelStack: 6624 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91360 kB' 'Slab: 172484 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.599 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:20:40.600 node0=512 expecting 512 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:20:40.600 00:20:40.600 real 0m0.557s 00:20:40.600 user 0m0.276s 00:20:40.600 sys 0m0.317s 00:20:40.600 00:46:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:40.601 00:46:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:20:40.601 ************************************ 00:20:40.601 END TEST custom_alloc 00:20:40.601 ************************************ 00:20:40.601 00:46:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:20:40.601 00:46:43 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:40.601 00:46:43 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:40.601 00:46:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:20:40.601 ************************************ 00:20:40.601 START TEST no_shrink_alloc 00:20:40.601 ************************************ 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:20:40.601 00:46:43 
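The no_shrink_alloc test starting here requests a 2 GiB pool pinned to node 0 (get_test_nr_hugepages 2097152 0); with the Hugepagesize of 2048 kB reported in the meminfo dumps above, that is where the nr_hugepages=1024 seen in the next few entries comes from. A hypothetical sketch of that arithmetic:

size_kb=2097152      # requested pool size in kB (2 GiB)
hugepage_kb=2048     # Hugepagesize from /proc/meminfo
echo "nr_hugepages=$(( size_kb / hugepage_kb ))"   # prints nr_hugepages=1024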
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:20:40.601 00:46:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:40.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:40.860 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:40.860 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6122792 kB' 'MemAvailable: 9493896 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 885564 kB' 'Inactive: 2803328 kB' 'Active(anon): 126304 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117400 kB' 'Mapped: 48220 kB' 'Shmem: 10468 kB' 'KReclaimable: 91360 kB' 'Slab: 172412 kB' 'SReclaimable: 91360 kB' 'SUnreclaim: 81052 kB' 'KernelStack: 6596 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.860 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
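A note on the AnonHugePages walk in progress here: the "always [madvise] never" string tested a few entries back is what /sys/kernel/mm/transparent_hugepage/enabled typically contains, so the check appears to be "is THP not forced to [never]?"; only then does the script sample AnonHugePages (0 kB in this run), presumably so THP-backed memory can be accounted for separately from the explicitly reserved pool. A hedged sketch of an equivalent guard (the sysfs path is an assumption, not shown in the trace):

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)   # helper sketched earlier; reports 0 in this run
fi
echo "anon_hugepages=$anon"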
00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.861 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:40.862 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:20:41.124 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6122792 kB' 'MemAvailable: 9493892 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 885372 kB' 'Inactive: 2803328 kB' 'Active(anon): 126112 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117184 kB' 'Mapped: 48008 kB' 'Shmem: 10468 kB' 'KReclaimable: 91352 kB' 'Slab: 172340 kB' 'SReclaimable: 91352 kB' 'SUnreclaim: 80988 kB' 'KernelStack: 6560 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 
00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.125 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:41.126 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6122792 kB' 'MemAvailable: 9493892 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 885116 kB' 'Inactive: 2803328 kB' 'Active(anon): 125856 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117244 kB' 'Mapped: 48008 kB' 'Shmem: 10468 kB' 'KReclaimable: 91352 kB' 'Slab: 172332 kB' 'SReclaimable: 91352 kB' 'SUnreclaim: 80980 kB' 'KernelStack: 6528 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 
00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.127 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:20:41.128 nr_hugepages=1024 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:20:41.128 resv_hugepages=0 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:20:41.128 surplus_hugepages=0 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:20:41.128 anon_hugepages=0 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6123664 kB' 'MemAvailable: 9494764 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 885068 kB' 'Inactive: 2803328 kB' 'Active(anon): 125808 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 116964 kB' 'Mapped: 48052 kB' 'Shmem: 10468 kB' 'KReclaimable: 91352 kB' 'Slab: 172356 kB' 'SReclaimable: 91352 kB' 'SUnreclaim: 81004 kB' 'KernelStack: 6528 kB' 'PageTables: 3700 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.128 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 
00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 
00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.129 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
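For readers skimming the trace above: the long run of '[[ <field> == HugePages_Total ]] ... continue' records is just setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time until it reaches HugePages_Total and echoes its value (1024 on this VM). A minimal sketch of that scanning pattern, reconstructed from the xtrace output rather than taken from the shipped script (get_meminfo_value is a hypothetical stand-in name), looks roughly like this:

    # Hypothetical stand-in for setup/common.sh's get_meminfo, reconstructed
    # from the trace above; the field names and the match/skip behaviour follow
    # the trace, the body is a simplified sketch, not the shipped helper.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # every field that is not the requested one appears above as "continue"
            [[ $var == "$get" ]] || continue
            echo "$val"          # the run above prints 1024 for HugePages_Total
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Total   # -> 1024 on this test VM
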
00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6123412 kB' 'MemUsed: 6118560 kB' 'SwapCached: 0 kB' 'Active: 885316 kB' 'Inactive: 2803328 kB' 'Active(anon): 126056 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 3573056 kB' 'Mapped: 48052 kB' 'AnonPages: 117208 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91352 kB' 'Slab: 172356 kB' 'SReclaimable: 91352 kB' 'SUnreclaim: 81004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.130 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
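The scan running here is the same field-by-field loop, but pointed at /sys/devices/system/node/node0/meminfo (the trace just above shows setup/common.sh@23-29 selecting the per-node file and stripping its 'Node 0 ' prefix) and looking for HugePages_Surp, which it resolves to 0 further below. A per-node variant of the same pattern, again reconstructed from the trace rather than copied from setup/common.sh (get_node_meminfo_value is a hypothetical name), might look like:

    # Hypothetical per-node variant, sketched from the trace: strip the
    # "Node N " prefix the kernel prepends in the per-node meminfo file,
    # then run the same field scan as the /proc/meminfo case.
    get_node_meminfo_value() {
        local get=$1 node=$2 var val _ line
        local mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node $node }            # drop the "Node N " prefix
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue    # skipped fields trace as "continue"
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    get_node_meminfo_value HugePages_Surp 0   # -> 0 in the run below
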
00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 
00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.131 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.132 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:20:41.132 node0=1024 expecting 1024 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:20:41.132 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:41.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:41.391 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:41.391 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:41.391 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:20:41.391 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6124176 kB' 'MemAvailable: 9495276 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 885868 kB' 'Inactive: 2803328 kB' 'Active(anon): 126608 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117764 kB' 'Mapped: 48176 kB' 'Shmem: 10468 kB' 'KReclaimable: 91352 kB' 'Slab: 172204 kB' 'SReclaimable: 91352 kB' 'SUnreclaim: 80852 kB' 'KernelStack: 6608 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.391 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.391 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 
00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.392 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6124176 kB' 'MemAvailable: 9495276 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 885364 kB' 'Inactive: 2803328 kB' 'Active(anon): 126104 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 
'Writeback: 0 kB' 'AnonPages: 117224 kB' 'Mapped: 48236 kB' 'Shmem: 10468 kB' 'KReclaimable: 91352 kB' 'Slab: 172196 kB' 'SReclaimable: 91352 kB' 'SUnreclaim: 80844 kB' 'KernelStack: 6528 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.653 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6124176 kB' 'MemAvailable: 9495276 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 885556 kB' 'Inactive: 2803328 kB' 'Active(anon): 126296 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117184 kB' 'Mapped: 48236 kB' 'Shmem: 10468 kB' 'KReclaimable: 91352 kB' 'Slab: 172196 kB' 'SReclaimable: 91352 kB' 'SUnreclaim: 80844 kB' 'KernelStack: 6544 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.654 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
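[Editorial note] The trace above and below is the per-key loop of the get_meminfo helper in test/setup/common.sh: it reads /proc/meminfo with IFS=': ', skips every key that does not match the requested one (here HugePages_Rsvd), echoes the matching value, and returns 0, after which hugepages.sh records anon=0, surp=0, resv=0 and checks them against nr_hugepages=1024. The following is a minimal Bash sketch of that pattern, not the SPDK source; the function name get_meminfo_sketch is hypothetical, and the real helper also supports per-NUMA-node lookups via /sys/devices/system/node/nodeN/meminfo, which this sketch omits.

```bash
#!/usr/bin/env bash
# Sketch of the meminfo lookup visible in the trace: split each line of
# /proc/meminfo on ': ', ignore non-matching keys, print the matching value.
# Keys like HugePages_Total carry no "kB" suffix, which is why the traced
# loop reads "var val _" and discards the third field.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the many "continue" entries above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Rough equivalent of the hugepages.sh accounting seen around lines 97-110 of
# the trace: with no anonymous, surplus, or reserved huge pages, the pool size
# reported by the kernel should equal the requested nr_hugepages (1024 here).
nr_hugepages=1024   # value the test requested, per the trace
anon=$(get_meminfo_sketch AnonHugePages)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)

echo "nr_hugepages=$nr_hugepages anon=$anon surp=$surp resv=$resv total=$total"
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting"
```

The original log resumes below with the remainder of the HugePages_Rsvd pass and the subsequent HugePages_Total lookup.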
00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:20:41.655 nr_hugepages=1024 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:20:41.655 resv_hugepages=0 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:20:41.655 surplus_hugepages=0 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:20:41.655 anon_hugepages=0 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6124176 kB' 'MemAvailable: 9495276 kB' 'Buffers: 2436 kB' 'Cached: 3570620 kB' 'SwapCached: 0 kB' 'Active: 885160 kB' 'Inactive: 2803328 kB' 'Active(anon): 125900 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117016 kB' 'Mapped: 48048 kB' 'Shmem: 10468 kB' 'KReclaimable: 91352 kB' 
'Slab: 172196 kB' 'SReclaimable: 91352 kB' 'SUnreclaim: 80844 kB' 'KernelStack: 6572 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.655 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:20:41.656 00:46:44 
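The trace above is the get_meminfo helper in setup/common.sh walking the meminfo file key by key: common.sh@31 splits each line on ': ', @32 skips every key that is not the one requested, and @33 echoes the value (1024 here) once HugePages_Total matches. A minimal standalone sketch of that parsing loop, using illustrative names rather than the exact SPDK source:

  # sketch: print the value of one key from a meminfo-style file
  get_meminfo_sketch() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
      echo "$val"                        # the "kB" unit, when present, lands in the third field
      return 0
    done < "$file"
    return 1
  }
  get_meminfo_sketch HugePages_Total     # prints 1024 on the node traced above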
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6124176 kB' 'MemUsed: 6117796 kB' 'SwapCached: 0 kB' 'Active: 885152 kB' 'Inactive: 2803328 kB' 'Active(anon): 125892 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2803328 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 3573056 kB' 'Mapped: 48048 kB' 'AnonPages: 117264 kB' 'Shmem: 10468 kB' 'KernelStack: 6572 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91352 kB' 'Slab: 172196 kB' 'SReclaimable: 91352 kB' 'SUnreclaim: 80844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 
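For the per-node variant the same helper only retargets the read: common.sh@22 defaults mem_f to /proc/meminfo, @23/@24 switch it to /sys/devices/system/node/node0/meminfo when that file exists, @28 slurps it with mapfile, and @29 strips the leading "Node 0 " prefix so the key/value scan can stay identical. A rough equivalent of that preprocessing, assuming extglob for the +([0-9]) pattern:

  node=0
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  shopt -s extglob                        # required by the +([0-9]) pattern below
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node 0 " column, as in common.sh@29
  printf '%s\n' "${mem[@]}"               # now parseable exactly like /proc/meminfo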
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 
00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.656 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:20:41.657 node0=1024 expecting 1024 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:20:41.657 00:20:41.657 real 0m1.093s 00:20:41.657 user 0m0.528s 00:20:41.657 sys 0m0.632s 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:41.657 00:46:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:20:41.657 ************************************ 00:20:41.657 END TEST no_shrink_alloc 00:20:41.657 ************************************ 00:20:41.657 00:46:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:20:41.657 00:46:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:20:41.657 00:46:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:20:41.657 00:46:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:20:41.657 00:46:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:20:41.657 00:46:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:20:41.657 00:46:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:20:41.657 00:46:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:20:41.657 00:46:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:20:41.657 00:20:41.657 real 0m4.874s 00:20:41.657 user 0m2.287s 00:20:41.657 sys 0m2.694s 00:20:41.657 00:46:44 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:41.657 00:46:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:20:41.657 ************************************ 00:20:41.657 END TEST hugepages 00:20:41.657 ************************************ 00:20:41.657 00:46:44 setup.sh -- 
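With node0 reporting HugePages_Total=1024 and HugePages_Surp=0, hugepages.sh@110 and @130 confirm the expected per-node count, and the suite then runs clear_hp (@217, @37 through @45) to zero every reservation before the next test group. A sketch of that cleanup, assuming the traced 'echo 0' lines write into each nr_hugepages file:

  # reset every huge page pool on every NUMA node (assumed target of the 'echo 0' trace)
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
  done
  export CLEAR_HUGE=yes                   # exported by the traced clear_hp step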
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:20:41.657 00:46:44 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:41.657 00:46:44 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:41.657 00:46:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:20:41.657 ************************************ 00:20:41.657 START TEST driver 00:20:41.657 ************************************ 00:20:41.657 00:46:44 setup.sh.driver -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:20:41.915 * Looking for test storage... 00:20:41.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:20:41.915 00:46:44 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:20:41.915 00:46:44 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:41.915 00:46:44 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:42.520 00:46:45 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:20:42.520 00:46:45 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:42.520 00:46:45 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:42.520 00:46:45 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:20:42.520 ************************************ 00:20:42.520 START TEST guess_driver 00:20:42.520 ************************************ 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:20:42.520 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:20:42.520 Looking for driver=uio_pci_generic 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:20:42.520 00:46:45 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:20:43.087 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:20:43.088 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:20:43.088 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:20:43.346 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:20:43.346 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:20:43.346 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:20:43.346 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:20:43.346 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:20:43.346 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:20:43.346 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:20:43.346 00:46:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:20:43.346 00:46:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:43.346 00:46:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:43.913 00:20:43.913 real 0m1.500s 00:20:43.913 user 0m0.575s 00:20:43.913 sys 0m0.924s 00:20:43.913 00:46:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:43.913 ************************************ 00:20:43.913 00:46:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:20:43.913 END TEST guess_driver 00:20:43.913 ************************************ 00:20:43.913 00:20:43.913 real 0m2.236s 00:20:43.913 user 0m0.817s 00:20:43.913 sys 0m1.473s 00:20:43.913 00:46:47 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:43.913 ************************************ 00:20:43.913 END TEST driver 00:20:43.913 00:46:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:20:43.913 ************************************ 00:20:43.913 00:46:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:20:43.914 00:46:47 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:43.914 00:46:47 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:43.914 00:46:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:20:43.914 ************************************ 00:20:43.914 START TEST devices 00:20:43.914 
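The driver test above settles on uio_pci_generic: setup/driver.sh first tries vfio (no populated /sys/kernel/iommu_groups entries and unsafe no-IOMMU mode not set to Y, so that branch returns 1), then accepts uio_pci_generic because modprobe --show-depends resolves it to .ko modules. A condensed sketch of that decision, with illustrative naming:

  pick_driver_sketch() {
    local unsafe=''
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
      unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
      echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
      echo uio_pci_generic
    else
      echo 'No valid driver found'
    fi
  }
  driver=$(pick_driver_sketch)            # prints uio_pci_generic on this VM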
************************************ 00:20:43.914 00:46:47 setup.sh.devices -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:20:44.171 * Looking for test storage... 00:20:44.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:20:44.172 00:46:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:20:44.172 00:46:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:20:44.172 00:46:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:44.172 00:46:47 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n2 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n3 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:44.739 00:46:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
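START TEST devices opens with get_zoned_devs: every /sys/block/nvme* entry is checked for queue/zoned, and any device reporting a zoned model other than "none" would be excluded from the mount tests (none are on this VM). A minimal sketch of that scan:

  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    [[ -e $nvme/queue/zoned ]] || continue
    [[ $(< "$nvme/queue/zoned") == none ]] && continue   # conventional drives pass through
    zoned_devs[$dev]=1                                   # zoned drives are skipped below
  done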
00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:20:44.739 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:20:44.739 00:46:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:44.739 00:46:48 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:44.998 No valid GPT data, bailing 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:20:44.998 00:46:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:20:44.998 00:46:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:44.998 00:46:48 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:44.998 No valid GPT data, bailing 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:20:44.998 00:46:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:20:44.998 00:46:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:44.998 00:46:48 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:44.998 No valid GPT data, bailing 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:20:44.998 00:46:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:20:44.998 00:46:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:44.998 00:46:48 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:20:44.998 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:44.998 00:46:48 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:45.257 No valid GPT data, bailing 00:20:45.257 00:46:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:45.257 00:46:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:20:45.257 00:46:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:20:45.257 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:20:45.257 00:46:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:20:45.257 00:46:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:45.257 00:46:48 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:20:45.257 00:46:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:20:45.257 00:46:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:20:45.257 00:46:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:20:45.257 00:46:48 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:20:45.257 00:46:48 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:20:45.257 00:46:48 setup.sh.devices 
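Each candidate disk then has to pass two gates before it lands in the blocks array: scripts/spdk-gpt.py and blkid must find no partition table ("No valid GPT data, bailing", empty PTTYPE), and its capacity must be at least min_disk_size (3221225472 bytes, i.e. 3 GiB). nvme0n1 through nvme0n3 report 4294967296 bytes and nvme1n1 5368709120, so all four qualify and nvme0n1 becomes test_disk. A sketch of that gate, assuming 512-byte sectors for the size conversion:

  min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in devices.sh@198
  disk_usable_sketch() {
    local block=$1 size
    # treat the disk as in use if any partition-table signature is present
    # (blkid here is a stand-in for the spdk-gpt.py probe run in the trace)
    [[ -n $(blkid -s PTTYPE -o value "/dev/$block" 2>/dev/null) ]] && return 1
    size=$(( $(< "/sys/block/$block/size") * 512 ))   # sector count to bytes (assumed 512B)
    (( size >= min_disk_size ))
  }
  disk_usable_sketch nvme0n1 && echo "nvme0n1 ok"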
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:20:45.257 00:46:48 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:45.257 00:46:48 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:45.257 00:46:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:20:45.257 ************************************ 00:20:45.257 START TEST nvme_mount 00:20:45.257 ************************************ 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:20:45.257 00:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:20:46.193 Creating new GPT entries in memory. 00:20:46.193 GPT data structures destroyed! You may now partition the disk using fdisk or 00:20:46.193 other utilities. 00:20:46.193 00:46:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:20:46.193 00:46:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:20:46.193 00:46:49 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:20:46.193 00:46:49 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:20:46.193 00:46:49 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:20:47.571 Creating new GPT entries in memory. 00:20:47.571 The operation has completed successfully. 
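The partition step traced here wipes the GPT, arms the uevent watcher, and creates one partition covering sectors 2048 through 264191 (offsets copied from the trace), then waits for the watcher before touching the new node. Reconstructed as a standalone sequence, assuming the watcher was backgrounded, which is what the later 'wait 71314' implies:

  disk=nvme0n1
  rootdir=/home/vagrant/spdk_repo/spdk                           # path as printed in the trace
  sgdisk "/dev/$disk" --zap-all
  # watch for the partition uevent so the test does not race udev
  "$rootdir/scripts/sync_dev_uevents.sh" block/partition "${disk}p1" &
  watcher=$!
  flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:2048:264191     # sectors copied from the trace
  wait "$watcher"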
00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71314 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:47.571 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:47.868 00:46:50 
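The verify step at devices.sh@105 is the core assertion of nvme_mount: with PCI_ALLOWED restricted to the controller under test, scripts/setup.sh config must report the device as 'Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev', and the dummy file must still be reachable through the mount afterwards. A compressed sketch of that check (the grep is an illustrative stand-in for the trace's pattern match):

  rootdir=/home/vagrant/spdk_repo/spdk
  mnt=$rootdir/test/setup/nvme_mount
  verify_sketch() {
    local pci=$1 expected=$2 mount_point=$3 test_file=$4
    [[ -n $test_file ]] && : > "$test_file"              # drop a marker file on the mount
    PCI_ALLOWED=$pci "$rootdir/scripts/setup.sh" config |
      grep -q "Active devices: .*$expected.*not binding" || return 1
    mountpoint -q "$mount_point" && [[ -e $test_file ]]
  }
  verify_sketch 0000:00:11.0 'mount@nvme0n1:nvme0n1p1' "$mnt" "$mnt/test_nvme"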
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:47.868 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:47.868 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:47.868 00:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:47.868 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:20:47.868 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:20:47.869 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:47.869 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:20:47.869 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:20:47.869 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:20:47.869 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:47.869 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:47.869 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:20:47.869 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:20:47.869 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:20:47.869 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:20:47.869 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:20:48.141 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:20:48.141 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:20:48.141 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:20:48.141 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:20:48.141 00:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:20:48.399 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:48.399 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:20:48.399 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:20:48.399 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:48.399 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:48.399 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:48.657 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:48.657 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:48.657 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:48.657 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:48.657 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:20:48.657 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:20:48.657 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:48.657 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:20:48.657 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:20:48.657 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:20:48.658 00:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:20:48.916 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:48.916 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:20:48.916 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:20:48.916 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:48.916 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:48.916 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:49.175 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:49.175 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:49.175 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:49.175 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:49.434 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:20:49.434 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:20:49.434 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:20:49.434 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:20:49.434 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:49.434 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:20:49.434 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:20:49.434 00:46:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:20:49.434 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:20:49.434 00:20:49.434 real 0m4.127s 00:20:49.434 user 0m0.765s 00:20:49.434 sys 0m1.087s 00:20:49.434 00:46:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:49.434 00:46:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:20:49.434 ************************************ 00:20:49.434 END TEST nvme_mount 00:20:49.434 
************************************ 00:20:49.434 00:46:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:20:49.434 00:46:52 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:49.434 00:46:52 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:49.434 00:46:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:20:49.434 ************************************ 00:20:49.434 START TEST dm_mount 00:20:49.434 ************************************ 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:20:49.434 00:46:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:20:50.370 Creating new GPT entries in memory. 00:20:50.370 GPT data structures destroyed! You may now partition the disk using fdisk or 00:20:50.370 other utilities. 00:20:50.370 00:46:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:20:50.370 00:46:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:20:50.370 00:46:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:20:50.370 00:46:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:20:50.370 00:46:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:20:51.745 Creating new GPT entries in memory. 00:20:51.745 The operation has completed successfully. 
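The dm_mount setup above repeats the partitioning pattern nvme_mount used: wipe the GPT with sgdisk --zap-all, then create one partition per test device with sgdisk --new while holding a flock on the disk and waiting for the matching udev events. A minimal stand-alone sketch of that pattern, assuming a disposable scratch disk at /dev/nvme0n1, sgdisk from gdisk, and a plain udevadm settle in place of SPDK's sync_dev_uevents.sh helper:

    # Sketch only: recreate the two test partitions the trace above builds.
    # Everything on /dev/nvme0n1 is destroyed; the sector math reproduces
    # the --new=1:2048:264191 / --new=2:264192:526335 calls seen in the log.
    disk=/dev/nvme0n1
    size=$((1073741824 / 4096))          # 262144 sectors per partition
    sgdisk "$disk" --zap-all             # drop GPT and protective MBR
    part_start=2048
    for part in 1 2; do
        part_end=$((part_start + size - 1))
        # flock serializes sgdisk against anything else touching the disk
        flock "$disk" sgdisk "$disk" --new="${part}:${part_start}:${part_end}"
        part_start=$((part_end + 1))
    done
    udevadm settle                       # wait for nvme0n1p1/p2 device nodes
    lsblk "$disk"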
00:20:51.745 00:46:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:20:51.745 00:46:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:20:51.745 00:46:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:20:51.745 00:46:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:20:51.745 00:46:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:20:52.680 The operation has completed successfully. 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 71747 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:52.680 00:46:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:52.938 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:52.938 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:52.938 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:52.938 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:20:53.197 00:46:56 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:53.197 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:53.458 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:53.458 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:53.458 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:53.458 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:20:53.731 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:20:53.731 00:20:53.731 real 0m4.268s 00:20:53.731 user 0m0.475s 00:20:53.731 sys 0m0.738s 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:53.731 00:46:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:20:53.731 ************************************ 00:20:53.731 END TEST dm_mount 00:20:53.731 ************************************ 00:20:53.731 00:46:56 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:20:53.731 00:46:56 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:20:53.731 00:46:56 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:20:53.731 00:46:56 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:20:53.731 00:46:56 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:20:53.731 00:46:56 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:20:53.731 00:46:56 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:20:53.990 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:20:53.990 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:20:53.990 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:20:53.990 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:20:53.990 00:46:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:20:53.990 00:46:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:20:53.990 00:46:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:20:53.990 00:46:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:20:53.990 00:46:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:20:53.990 00:46:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:20:53.990 00:46:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:20:53.990 00:20:53.990 real 0m9.976s 00:20:53.990 user 0m1.918s 00:20:53.990 sys 0m2.445s 00:20:53.990 00:46:57 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:53.990 ************************************ 00:20:53.990 END TEST devices 00:20:53.990 00:46:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:20:53.990 ************************************ 00:20:53.990 00:20:53.990 real 0m22.300s 00:20:53.990 user 0m7.278s 00:20:53.990 sys 0m9.521s 00:20:53.990 00:46:57 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:53.990 00:46:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:20:53.990 ************************************ 00:20:53.990 END TEST setup.sh 00:20:53.990 ************************************ 00:20:53.990 00:46:57 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:20:54.926 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:54.926 Hugepages 00:20:54.926 node hugesize free / total 00:20:54.926 node0 1048576kB 0 / 0 00:20:54.926 node0 2048kB 2048 / 2048 00:20:54.926 00:20:54.926 Type BDF Vendor Device NUMA Driver Device Block devices 00:20:54.926 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:20:54.926 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:20:54.926 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:20:54.926 00:46:58 -- spdk/autotest.sh@130 -- # uname -s 00:20:54.926 00:46:58 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:20:54.926 00:46:58 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:20:54.926 00:46:58 -- common/autotest_common.sh@1528 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:55.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:55.862 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:55.862 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:55.862 00:46:59 -- common/autotest_common.sh@1529 -- # sleep 1 00:20:56.798 00:47:00 -- common/autotest_common.sh@1530 -- # bdfs=() 00:20:56.798 00:47:00 -- common/autotest_common.sh@1530 -- # local bdfs 00:20:56.798 00:47:00 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:20:56.798 00:47:00 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:20:56.798 00:47:00 -- common/autotest_common.sh@1510 -- # bdfs=() 00:20:56.798 00:47:00 -- common/autotest_common.sh@1510 -- # local bdfs 00:20:56.798 00:47:00 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:56.798 00:47:00 -- common/autotest_common.sh@1511 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:56.798 00:47:00 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:20:56.798 00:47:00 -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:20:56.798 00:47:00 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:56.798 00:47:00 -- common/autotest_common.sh@1533 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:57.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:57.380 Waiting for block devices as requested 00:20:57.380 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.380 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.639 00:47:00 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:20:57.639 00:47:00 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:20:57.639 00:47:00 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:20:57.639 00:47:00 -- common/autotest_common.sh@1499 -- # grep 0000:00:10.0/nvme/nvme 00:20:57.639 00:47:00 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:20:57.639 00:47:00 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:20:57.639 00:47:00 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:20:57.639 00:47:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme1 00:20:57.639 00:47:00 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme1 00:20:57.639 00:47:00 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme1 ]] 00:20:57.639 00:47:00 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme1 00:20:57.639 00:47:00 -- common/autotest_common.sh@1542 -- # grep oacs 00:20:57.639 00:47:00 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:20:57.639 00:47:00 -- common/autotest_common.sh@1542 -- # oacs=' 0x12a' 00:20:57.639 00:47:00 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:20:57.639 00:47:00 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:20:57.639 00:47:00 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme1 
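The pre-cleanup loop above resolves each allowed PCI address to its /dev/nvmeX controller through sysfs and then reads the OACS (Optional Admin Command Support) field from nvme id-ctrl; bit 3 (mask 0x8) is the Namespace Management capability, which is why the trace derives oacs_ns_manage=8 from the reported 0x12a. A rough stand-alone version of that capability check, assuming nvme-cli is installed; the script name and argument handling are illustrative, not part of the SPDK helpers:

    #!/usr/bin/env bash
    # Sketch: report whether an NVMe controller supports namespace management.
    ctrlr=${1:-/dev/nvme0}                                   # controller node, not a namespace
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)  # e.g. " 0x12a"
    if (( oacs & 0x8 )); then
        echo "$ctrlr supports namespace management (oacs=$oacs)"
    else
        echo "$ctrlr does not support namespace management (oacs=$oacs)"
    fi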
00:20:57.639 00:47:00 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:20:57.639 00:47:00 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:20:57.639 00:47:00 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:20:57.639 00:47:00 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:20:57.639 00:47:00 -- common/autotest_common.sh@1554 -- # continue 00:20:57.639 00:47:00 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:20:57.639 00:47:00 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:20:57.639 00:47:00 -- common/autotest_common.sh@1499 -- # grep 0000:00:11.0/nvme/nvme 00:20:57.639 00:47:00 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:20:57.639 00:47:00 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:20:57.639 00:47:00 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:20:57.639 00:47:00 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:20:57.639 00:47:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:20:57.639 00:47:00 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:20:57.639 00:47:00 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:20:57.639 00:47:00 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:20:57.639 00:47:00 -- common/autotest_common.sh@1542 -- # grep oacs 00:20:57.639 00:47:00 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:20:57.639 00:47:00 -- common/autotest_common.sh@1542 -- # oacs=' 0x12a' 00:20:57.639 00:47:00 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:20:57.639 00:47:00 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:20:57.639 00:47:00 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:20:57.639 00:47:00 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:20:57.639 00:47:00 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:20:57.639 00:47:00 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:20:57.639 00:47:00 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:20:57.639 00:47:00 -- common/autotest_common.sh@1554 -- # continue 00:20:57.639 00:47:00 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:20:57.639 00:47:00 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:57.639 00:47:00 -- common/autotest_common.sh@10 -- # set +x 00:20:57.639 00:47:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:20:57.639 00:47:00 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:57.639 00:47:00 -- common/autotest_common.sh@10 -- # set +x 00:20:57.639 00:47:00 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:58.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:58.465 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:58.465 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:58.465 00:47:01 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:20:58.465 00:47:01 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:58.465 00:47:01 -- common/autotest_common.sh@10 -- # set +x 00:20:58.465 00:47:01 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:20:58.465 00:47:01 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:20:58.465 00:47:01 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:20:58.465 00:47:01 -- common/autotest_common.sh@1574 -- 
# bdfs=() 00:20:58.465 00:47:01 -- common/autotest_common.sh@1574 -- # local bdfs 00:20:58.465 00:47:01 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:20:58.465 00:47:01 -- common/autotest_common.sh@1510 -- # bdfs=() 00:20:58.465 00:47:01 -- common/autotest_common.sh@1510 -- # local bdfs 00:20:58.465 00:47:01 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:58.465 00:47:01 -- common/autotest_common.sh@1511 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:58.465 00:47:01 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:20:58.724 00:47:01 -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:20:58.724 00:47:01 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:58.724 00:47:01 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:20:58.724 00:47:01 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:20:58.724 00:47:01 -- common/autotest_common.sh@1577 -- # device=0x0010 00:20:58.724 00:47:01 -- common/autotest_common.sh@1578 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:20:58.724 00:47:01 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:20:58.724 00:47:01 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:20:58.724 00:47:01 -- common/autotest_common.sh@1577 -- # device=0x0010 00:20:58.724 00:47:01 -- common/autotest_common.sh@1578 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:20:58.724 00:47:01 -- common/autotest_common.sh@1583 -- # printf '%s\n' 00:20:58.724 00:47:01 -- common/autotest_common.sh@1589 -- # [[ -z '' ]] 00:20:58.724 00:47:01 -- common/autotest_common.sh@1590 -- # return 0 00:20:58.724 00:47:01 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:20:58.724 00:47:01 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:20:58.724 00:47:01 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:20:58.724 00:47:01 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:20:58.724 00:47:01 -- spdk/autotest.sh@162 -- # timing_enter lib 00:20:58.724 00:47:01 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:58.724 00:47:01 -- common/autotest_common.sh@10 -- # set +x 00:20:58.724 00:47:01 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:20:58.724 00:47:01 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:58.724 00:47:01 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:58.724 00:47:01 -- common/autotest_common.sh@10 -- # set +x 00:20:58.724 ************************************ 00:20:58.724 START TEST env 00:20:58.724 ************************************ 00:20:58.724 00:47:01 env -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:20:58.724 * Looking for test storage... 
00:20:58.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:20:58.724 00:47:01 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:20:58.724 00:47:01 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:58.724 00:47:01 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:58.724 00:47:01 env -- common/autotest_common.sh@10 -- # set +x 00:20:58.724 ************************************ 00:20:58.724 START TEST env_memory 00:20:58.724 ************************************ 00:20:58.724 00:47:01 env.env_memory -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:20:58.724 00:20:58.724 00:20:58.724 CUnit - A unit testing framework for C - Version 2.1-3 00:20:58.724 http://cunit.sourceforge.net/ 00:20:58.724 00:20:58.724 00:20:58.724 Suite: memory 00:20:58.724 Test: alloc and free memory map ...[2024-05-15 00:47:01.959965] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:20:58.724 passed 00:20:58.724 Test: mem map translation ...[2024-05-15 00:47:01.992053] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:20:58.724 [2024-05-15 00:47:01.992404] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:20:58.724 [2024-05-15 00:47:01.992617] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:20:58.724 [2024-05-15 00:47:01.992840] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:20:58.984 passed 00:20:58.984 Test: mem map registration ...[2024-05-15 00:47:02.057993] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:20:58.984 [2024-05-15 00:47:02.058346] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:20:58.984 passed 00:20:58.984 Test: mem map adjacent registrations ...passed 00:20:58.984 00:20:58.984 Run Summary: Type Total Ran Passed Failed Inactive 00:20:58.984 suites 1 1 n/a 0 0 00:20:58.984 tests 4 4 4 0 0 00:20:58.984 asserts 152 152 152 0 n/a 00:20:58.984 00:20:58.984 Elapsed time = 0.217 seconds 00:20:58.984 ************************************ 00:20:58.984 END TEST env_memory 00:20:58.984 ************************************ 00:20:58.984 00:20:58.984 real 0m0.234s 00:20:58.984 user 0m0.209s 00:20:58.984 sys 0m0.021s 00:20:58.984 00:47:02 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:58.984 00:47:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:20:58.984 00:47:02 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:20:58.984 00:47:02 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:58.984 00:47:02 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:58.984 00:47:02 env -- common/autotest_common.sh@10 -- # set +x 00:20:58.984 ************************************ 00:20:58.984 START TEST env_vtophys 00:20:58.984 ************************************ 00:20:58.984 00:47:02 
env.env_vtophys -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:20:58.984 EAL: lib.eal log level changed from notice to debug 00:20:58.984 EAL: Detected lcore 0 as core 0 on socket 0 00:20:58.984 EAL: Detected lcore 1 as core 0 on socket 0 00:20:58.984 EAL: Detected lcore 2 as core 0 on socket 0 00:20:58.984 EAL: Detected lcore 3 as core 0 on socket 0 00:20:58.984 EAL: Detected lcore 4 as core 0 on socket 0 00:20:58.984 EAL: Detected lcore 5 as core 0 on socket 0 00:20:58.984 EAL: Detected lcore 6 as core 0 on socket 0 00:20:58.984 EAL: Detected lcore 7 as core 0 on socket 0 00:20:58.984 EAL: Detected lcore 8 as core 0 on socket 0 00:20:58.984 EAL: Detected lcore 9 as core 0 on socket 0 00:20:58.984 EAL: Maximum logical cores by configuration: 128 00:20:58.984 EAL: Detected CPU lcores: 10 00:20:58.984 EAL: Detected NUMA nodes: 1 00:20:58.984 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:20:58.984 EAL: Detected shared linkage of DPDK 00:20:58.984 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:20:58.984 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:20:58.984 EAL: Registered [vdev] bus. 00:20:58.984 EAL: bus.vdev log level changed from disabled to notice 00:20:58.984 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:20:58.984 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:20:58.984 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:20:58.984 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:20:58.984 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:20:58.984 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:20:58.984 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:20:58.984 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:20:58.984 EAL: No shared files mode enabled, IPC will be disabled 00:20:58.984 EAL: No shared files mode enabled, IPC is disabled 00:20:58.984 EAL: Selected IOVA mode 'PA' 00:20:58.984 EAL: Probing VFIO support... 00:20:58.984 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:20:58.984 EAL: VFIO modules not loaded, skipping VFIO support... 00:20:58.984 EAL: Ask a virtual area of 0x2e000 bytes 00:20:58.984 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:20:58.984 EAL: Setting up physically contiguous memory... 
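EAL reports that /sys/module/vfio is missing and skips VFIO support, which is why this run stays in IOVA mode 'PA' and the earlier setup.sh passes bound the NVMe controllers to uio_pci_generic instead of vfio-pci. A small sketch of the kind of pre-flight check that avoids that fallback on hosts with a working IOMMU; the module names are the standard kernel ones and this is not part of the SPDK scripts:

    # Sketch: load vfio-pci if possible, otherwise fall back to uio_pci_generic.
    if [[ -d /sys/module/vfio_pci ]] || modprobe vfio-pci 2>/dev/null; then
        echo "vfio-pci available"
    else
        echo "vfio-pci unavailable, using uio_pci_generic" >&2
        modprobe uio_pci_generic
    fi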
00:20:58.984 EAL: Setting maximum number of open files to 524288 00:20:58.984 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:20:58.984 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:20:58.984 EAL: Ask a virtual area of 0x61000 bytes 00:20:58.984 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:20:58.984 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:58.984 EAL: Ask a virtual area of 0x400000000 bytes 00:20:58.984 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:20:58.984 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:20:58.984 EAL: Ask a virtual area of 0x61000 bytes 00:20:58.984 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:20:58.984 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:58.984 EAL: Ask a virtual area of 0x400000000 bytes 00:20:58.984 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:20:58.984 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:20:58.984 EAL: Ask a virtual area of 0x61000 bytes 00:20:58.984 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:20:58.984 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:58.984 EAL: Ask a virtual area of 0x400000000 bytes 00:20:58.984 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:20:58.984 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:20:58.984 EAL: Ask a virtual area of 0x61000 bytes 00:20:58.984 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:20:58.985 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:58.985 EAL: Ask a virtual area of 0x400000000 bytes 00:20:58.985 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:20:58.985 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:20:58.985 EAL: Hugepages will be freed exactly as allocated. 00:20:58.985 EAL: No shared files mode enabled, IPC is disabled 00:20:58.985 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: TSC frequency is ~2200000 KHz 00:20:59.244 EAL: Main lcore 0 is ready (tid=7fb2dd628a00;cpuset=[0]) 00:20:59.244 EAL: Trying to obtain current memory policy. 00:20:59.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:59.244 EAL: Restoring previous memory policy: 0 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was expanded by 2MB 00:20:59.244 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: No PCI address specified using 'addr=' in: bus=pci 00:20:59.244 EAL: Mem event callback 'spdk:(nil)' registered 00:20:59.244 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:20:59.244 00:20:59.244 00:20:59.244 CUnit - A unit testing framework for C - Version 2.1-3 00:20:59.244 http://cunit.sourceforge.net/ 00:20:59.244 00:20:59.244 00:20:59.244 Suite: components_suite 00:20:59.244 Test: vtophys_malloc_test ...passed 00:20:59.244 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:20:59.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:59.244 EAL: Restoring previous memory policy: 4 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was expanded by 4MB 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was shrunk by 4MB 00:20:59.244 EAL: Trying to obtain current memory policy. 00:20:59.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:59.244 EAL: Restoring previous memory policy: 4 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was expanded by 6MB 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was shrunk by 6MB 00:20:59.244 EAL: Trying to obtain current memory policy. 00:20:59.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:59.244 EAL: Restoring previous memory policy: 4 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was expanded by 10MB 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was shrunk by 10MB 00:20:59.244 EAL: Trying to obtain current memory policy. 00:20:59.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:59.244 EAL: Restoring previous memory policy: 4 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was expanded by 18MB 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was shrunk by 18MB 00:20:59.244 EAL: Trying to obtain current memory policy. 00:20:59.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:59.244 EAL: Restoring previous memory policy: 4 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was expanded by 34MB 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was shrunk by 34MB 00:20:59.244 EAL: Trying to obtain current memory policy. 
00:20:59.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:59.244 EAL: Restoring previous memory policy: 4 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was expanded by 66MB 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was shrunk by 66MB 00:20:59.244 EAL: Trying to obtain current memory policy. 00:20:59.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:59.244 EAL: Restoring previous memory policy: 4 00:20:59.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.244 EAL: request: mp_malloc_sync 00:20:59.244 EAL: No shared files mode enabled, IPC is disabled 00:20:59.244 EAL: Heap on socket 0 was expanded by 130MB 00:20:59.502 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.502 EAL: request: mp_malloc_sync 00:20:59.502 EAL: No shared files mode enabled, IPC is disabled 00:20:59.502 EAL: Heap on socket 0 was shrunk by 130MB 00:20:59.502 EAL: Trying to obtain current memory policy. 00:20:59.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:59.502 EAL: Restoring previous memory policy: 4 00:20:59.502 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.502 EAL: request: mp_malloc_sync 00:20:59.502 EAL: No shared files mode enabled, IPC is disabled 00:20:59.502 EAL: Heap on socket 0 was expanded by 258MB 00:20:59.502 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.760 EAL: request: mp_malloc_sync 00:20:59.760 EAL: No shared files mode enabled, IPC is disabled 00:20:59.760 EAL: Heap on socket 0 was shrunk by 258MB 00:20:59.760 EAL: Trying to obtain current memory policy. 00:20:59.760 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:59.760 EAL: Restoring previous memory policy: 4 00:20:59.760 EAL: Calling mem event callback 'spdk:(nil)' 00:20:59.760 EAL: request: mp_malloc_sync 00:20:59.760 EAL: No shared files mode enabled, IPC is disabled 00:20:59.760 EAL: Heap on socket 0 was expanded by 514MB 00:21:00.019 EAL: Calling mem event callback 'spdk:(nil)' 00:21:00.019 EAL: request: mp_malloc_sync 00:21:00.019 EAL: No shared files mode enabled, IPC is disabled 00:21:00.019 EAL: Heap on socket 0 was shrunk by 514MB 00:21:00.019 EAL: Trying to obtain current memory policy. 
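The expand/shrink pairs above come from vtophys_spdk_malloc_test allocating and freeing progressively larger buffers (4MB, 6MB, 10MB, and so on, each step being 2^n + 2MB); every expansion is backed by 2MB hugepages drawn from the 2048-page pool reported by setup.sh status earlier, and every free returns them. A trivial, purely illustrative way to watch that accounting from outside the test while it runs (Ctrl-C to stop):

    # Sketch: sample hugepage usage once a second while the env tests run.
    while sleep 1; do
        awk '/^HugePages_(Total|Free|Rsvd)/ { printf "%s %s  ", $1, $2 } END { print "" }' /proc/meminfo
    done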
00:21:00.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:00.586 EAL: Restoring previous memory policy: 4 00:21:00.586 EAL: Calling mem event callback 'spdk:(nil)' 00:21:00.586 EAL: request: mp_malloc_sync 00:21:00.586 EAL: No shared files mode enabled, IPC is disabled 00:21:00.586 EAL: Heap on socket 0 was expanded by 1026MB 00:21:00.845 EAL: Calling mem event callback 'spdk:(nil)' 00:21:01.105 passed 00:21:01.105 00:21:01.105 Run Summary: Type Total Ran Passed Failed Inactive 00:21:01.105 suites 1 1 n/a 0 0 00:21:01.105 tests 2 2 2 0 0 00:21:01.105 asserts 5218 5218 5218 0 n/a 00:21:01.105 00:21:01.105 Elapsed time = 1.876 seconds 00:21:01.105 EAL: request: mp_malloc_sync 00:21:01.105 EAL: No shared files mode enabled, IPC is disabled 00:21:01.105 EAL: Heap on socket 0 was shrunk by 1026MB 00:21:01.105 EAL: Calling mem event callback 'spdk:(nil)' 00:21:01.105 EAL: request: mp_malloc_sync 00:21:01.105 EAL: No shared files mode enabled, IPC is disabled 00:21:01.105 EAL: Heap on socket 0 was shrunk by 2MB 00:21:01.105 EAL: No shared files mode enabled, IPC is disabled 00:21:01.105 EAL: No shared files mode enabled, IPC is disabled 00:21:01.105 EAL: No shared files mode enabled, IPC is disabled 00:21:01.105 00:21:01.105 real 0m2.083s 00:21:01.105 user 0m1.194s 00:21:01.105 sys 0m0.745s 00:21:01.105 00:47:04 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:01.105 ************************************ 00:21:01.105 END TEST env_vtophys 00:21:01.105 00:47:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:21:01.105 ************************************ 00:21:01.105 00:47:04 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:21:01.105 00:47:04 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:01.105 00:47:04 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:01.105 00:47:04 env -- common/autotest_common.sh@10 -- # set +x 00:21:01.105 ************************************ 00:21:01.105 START TEST env_pci 00:21:01.105 ************************************ 00:21:01.105 00:47:04 env.env_pci -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:21:01.105 00:21:01.105 00:21:01.105 CUnit - A unit testing framework for C - Version 2.1-3 00:21:01.105 http://cunit.sourceforge.net/ 00:21:01.105 00:21:01.105 00:21:01.105 Suite: pci 00:21:01.105 Test: pci_hook ...[2024-05-15 00:47:04.356079] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 72946 has claimed it 00:21:01.105 passed 00:21:01.105 00:21:01.105 Run Summary: Type Total Ran Passed Failed Inactive 00:21:01.105 suites 1 1 n/a 0 0 00:21:01.105 tests 1 1 1 0 0 00:21:01.105 asserts 25 25 25 0 n/a 00:21:01.105 00:21:01.105 Elapsed time = 0.002 seconds 00:21:01.106 EAL: Cannot find device (10000:00:01.0) 00:21:01.106 EAL: Failed to attach device on primary process 00:21:01.106 00:21:01.106 real 0m0.019s 00:21:01.106 user 0m0.012s 00:21:01.106 sys 0m0.007s 00:21:01.106 00:47:04 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:01.106 00:47:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:21:01.106 ************************************ 00:21:01.106 END TEST env_pci 00:21:01.106 ************************************ 00:21:01.380 00:47:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:21:01.380 00:47:04 env -- env/env.sh@15 -- # uname 00:21:01.380 00:47:04 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:21:01.380 00:47:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:21:01.380 00:47:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:21:01.380 00:47:04 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:21:01.380 00:47:04 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:01.380 00:47:04 env -- common/autotest_common.sh@10 -- # set +x 00:21:01.380 ************************************ 00:21:01.380 START TEST env_dpdk_post_init 00:21:01.380 ************************************ 00:21:01.380 00:47:04 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:21:01.380 EAL: Detected CPU lcores: 10 00:21:01.380 EAL: Detected NUMA nodes: 1 00:21:01.380 EAL: Detected shared linkage of DPDK 00:21:01.380 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:21:01.380 EAL: Selected IOVA mode 'PA' 00:21:01.380 TELEMETRY: No legacy callbacks, legacy socket not created 00:21:01.380 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:21:01.380 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:21:01.380 Starting DPDK initialization... 00:21:01.380 Starting SPDK post initialization... 00:21:01.380 SPDK NVMe probe 00:21:01.380 Attaching to 0000:00:10.0 00:21:01.380 Attaching to 0000:00:11.0 00:21:01.380 Attached to 0000:00:10.0 00:21:01.380 Attached to 0000:00:11.0 00:21:01.380 Cleaning up... 00:21:01.380 00:21:01.380 real 0m0.187s 00:21:01.380 user 0m0.050s 00:21:01.380 sys 0m0.037s 00:21:01.380 00:47:04 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:01.380 ************************************ 00:21:01.380 END TEST env_dpdk_post_init 00:21:01.380 00:47:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.380 ************************************ 00:21:01.380 00:47:04 env -- env/env.sh@26 -- # uname 00:21:01.380 00:47:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:21:01.380 00:47:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:21:01.380 00:47:04 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:01.380 00:47:04 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:01.380 00:47:04 env -- common/autotest_common.sh@10 -- # set +x 00:21:01.380 ************************************ 00:21:01.380 START TEST env_mem_callbacks 00:21:01.380 ************************************ 00:21:01.380 00:47:04 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:21:01.638 EAL: Detected CPU lcores: 10 00:21:01.638 EAL: Detected NUMA nodes: 1 00:21:01.638 EAL: Detected shared linkage of DPDK 00:21:01.638 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:21:01.638 EAL: Selected IOVA mode 'PA' 00:21:01.638 TELEMETRY: No legacy callbacks, legacy socket not created 00:21:01.638 00:21:01.638 00:21:01.638 CUnit - A unit testing framework for C - Version 2.1-3 00:21:01.638 http://cunit.sourceforge.net/ 00:21:01.638 00:21:01.638 00:21:01.638 Suite: memory 00:21:01.638 Test: test ... 
00:21:01.638 register 0x200000200000 2097152 00:21:01.638 malloc 3145728 00:21:01.638 register 0x200000400000 4194304 00:21:01.638 buf 0x200000500000 len 3145728 PASSED 00:21:01.638 malloc 64 00:21:01.638 buf 0x2000004fff40 len 64 PASSED 00:21:01.638 malloc 4194304 00:21:01.638 register 0x200000800000 6291456 00:21:01.638 buf 0x200000a00000 len 4194304 PASSED 00:21:01.638 free 0x200000500000 3145728 00:21:01.638 free 0x2000004fff40 64 00:21:01.638 unregister 0x200000400000 4194304 PASSED 00:21:01.638 free 0x200000a00000 4194304 00:21:01.638 unregister 0x200000800000 6291456 PASSED 00:21:01.638 malloc 8388608 00:21:01.638 register 0x200000400000 10485760 00:21:01.638 buf 0x200000600000 len 8388608 PASSED 00:21:01.638 free 0x200000600000 8388608 00:21:01.638 unregister 0x200000400000 10485760 PASSED 00:21:01.638 passed 00:21:01.638 00:21:01.638 Run Summary: Type Total Ran Passed Failed Inactive 00:21:01.638 suites 1 1 n/a 0 0 00:21:01.638 tests 1 1 1 0 0 00:21:01.638 asserts 15 15 15 0 n/a 00:21:01.638 00:21:01.638 Elapsed time = 0.009 seconds 00:21:01.638 00:21:01.638 real 0m0.141s 00:21:01.638 user 0m0.018s 00:21:01.638 sys 0m0.022s 00:21:01.638 00:47:04 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:01.638 00:47:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:21:01.638 ************************************ 00:21:01.638 END TEST env_mem_callbacks 00:21:01.638 ************************************ 00:21:01.638 00:21:01.638 real 0m3.022s 00:21:01.638 user 0m1.599s 00:21:01.638 sys 0m1.056s 00:21:01.638 00:47:04 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:01.638 ************************************ 00:21:01.638 END TEST env 00:21:01.638 ************************************ 00:21:01.638 00:47:04 env -- common/autotest_common.sh@10 -- # set +x 00:21:01.638 00:47:04 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:21:01.638 00:47:04 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:01.638 00:47:04 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:01.638 00:47:04 -- common/autotest_common.sh@10 -- # set +x 00:21:01.638 ************************************ 00:21:01.638 START TEST rpc 00:21:01.638 ************************************ 00:21:01.638 00:47:04 rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:21:01.896 * Looking for test storage... 00:21:01.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:21:01.896 00:47:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=73061 00:21:01.896 00:47:04 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:21:01.896 00:47:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:21:01.896 00:47:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 73061 00:21:01.896 00:47:04 rpc -- common/autotest_common.sh@828 -- # '[' -z 73061 ']' 00:21:01.896 00:47:04 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.896 00:47:04 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:01.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.896 00:47:04 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
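waitforlisten above polls the freshly launched spdk_tgt until its JSON-RPC socket at /var/tmp/spdk.sock answers, before rpc.sh starts issuing bdev RPCs. A rough equivalent using the rpc.py client shipped in the same tree; the retry count and sleep interval here are illustrative, not the values autotest_common.sh actually uses:

    # Sketch: start the SPDK target and wait until its RPC socket responds.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" -e bdev &
    tgt_pid=$!
    for _ in $(seq 1 100); do
        if "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"
            break
        fi
        sleep 0.1
    done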
00:21:01.896 00:47:04 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:01.896 00:47:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:01.896 [2024-05-15 00:47:05.050320] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:21:01.897 [2024-05-15 00:47:05.050445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73061 ] 00:21:02.155 [2024-05-15 00:47:05.195211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.155 [2024-05-15 00:47:05.326078] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:21:02.155 [2024-05-15 00:47:05.326151] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73061' to capture a snapshot of events at runtime. 00:21:02.155 [2024-05-15 00:47:05.326166] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.155 [2024-05-15 00:47:05.326178] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.155 [2024-05-15 00:47:05.326187] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73061 for offline analysis/debug. 00:21:02.155 [2024-05-15 00:47:05.326222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.090 00:47:06 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:03.090 00:47:06 rpc -- common/autotest_common.sh@861 -- # return 0 00:21:03.090 00:47:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:21:03.090 00:47:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:21:03.090 00:47:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:21:03.090 00:47:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:21:03.090 00:47:06 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:03.090 00:47:06 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:03.090 00:47:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:03.090 ************************************ 00:21:03.090 START TEST rpc_integrity 00:21:03.090 ************************************ 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.090 00:47:06 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:21:03.090 { 00:21:03.090 "aliases": [ 00:21:03.090 "2fca1ff6-f1ba-4200-83ae-42c98a8a2bde" 00:21:03.090 ], 00:21:03.090 "assigned_rate_limits": { 00:21:03.090 "r_mbytes_per_sec": 0, 00:21:03.090 "rw_ios_per_sec": 0, 00:21:03.090 "rw_mbytes_per_sec": 0, 00:21:03.090 "w_mbytes_per_sec": 0 00:21:03.090 }, 00:21:03.090 "block_size": 512, 00:21:03.090 "claimed": false, 00:21:03.090 "driver_specific": {}, 00:21:03.090 "memory_domains": [ 00:21:03.090 { 00:21:03.090 "dma_device_id": "system", 00:21:03.090 "dma_device_type": 1 00:21:03.090 }, 00:21:03.090 { 00:21:03.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.090 "dma_device_type": 2 00:21:03.090 } 00:21:03.090 ], 00:21:03.090 "name": "Malloc0", 00:21:03.090 "num_blocks": 16384, 00:21:03.090 "product_name": "Malloc disk", 00:21:03.090 "supported_io_types": { 00:21:03.090 "abort": true, 00:21:03.090 "compare": false, 00:21:03.090 "compare_and_write": false, 00:21:03.090 "flush": true, 00:21:03.090 "nvme_admin": false, 00:21:03.090 "nvme_io": false, 00:21:03.090 "read": true, 00:21:03.090 "reset": true, 00:21:03.090 "unmap": true, 00:21:03.090 "write": true, 00:21:03.090 "write_zeroes": true 00:21:03.090 }, 00:21:03.090 "uuid": "2fca1ff6-f1ba-4200-83ae-42c98a8a2bde", 00:21:03.090 "zoned": false 00:21:03.090 } 00:21:03.090 ]' 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:03.090 [2024-05-15 00:47:06.276988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:21:03.090 [2024-05-15 00:47:06.277050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.090 [2024-05-15 00:47:06.277072] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x133be70 00:21:03.090 [2024-05-15 00:47:06.277086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.090 [2024-05-15 00:47:06.279133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.090 [2024-05-15 00:47:06.279172] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:21:03.090 Passthru0 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:03.090 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 
0 == 0 ]] 00:21:03.090 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:21:03.090 { 00:21:03.090 "aliases": [ 00:21:03.090 "2fca1ff6-f1ba-4200-83ae-42c98a8a2bde" 00:21:03.090 ], 00:21:03.090 "assigned_rate_limits": { 00:21:03.090 "r_mbytes_per_sec": 0, 00:21:03.090 "rw_ios_per_sec": 0, 00:21:03.090 "rw_mbytes_per_sec": 0, 00:21:03.090 "w_mbytes_per_sec": 0 00:21:03.090 }, 00:21:03.090 "block_size": 512, 00:21:03.090 "claim_type": "exclusive_write", 00:21:03.090 "claimed": true, 00:21:03.090 "driver_specific": {}, 00:21:03.090 "memory_domains": [ 00:21:03.090 { 00:21:03.090 "dma_device_id": "system", 00:21:03.090 "dma_device_type": 1 00:21:03.090 }, 00:21:03.090 { 00:21:03.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.090 "dma_device_type": 2 00:21:03.090 } 00:21:03.090 ], 00:21:03.091 "name": "Malloc0", 00:21:03.091 "num_blocks": 16384, 00:21:03.091 "product_name": "Malloc disk", 00:21:03.091 "supported_io_types": { 00:21:03.091 "abort": true, 00:21:03.091 "compare": false, 00:21:03.091 "compare_and_write": false, 00:21:03.091 "flush": true, 00:21:03.091 "nvme_admin": false, 00:21:03.091 "nvme_io": false, 00:21:03.091 "read": true, 00:21:03.091 "reset": true, 00:21:03.091 "unmap": true, 00:21:03.091 "write": true, 00:21:03.091 "write_zeroes": true 00:21:03.091 }, 00:21:03.091 "uuid": "2fca1ff6-f1ba-4200-83ae-42c98a8a2bde", 00:21:03.091 "zoned": false 00:21:03.091 }, 00:21:03.091 { 00:21:03.091 "aliases": [ 00:21:03.091 "064cf8ab-b91b-5596-b6e8-feb9278b54c4" 00:21:03.091 ], 00:21:03.091 "assigned_rate_limits": { 00:21:03.091 "r_mbytes_per_sec": 0, 00:21:03.091 "rw_ios_per_sec": 0, 00:21:03.091 "rw_mbytes_per_sec": 0, 00:21:03.091 "w_mbytes_per_sec": 0 00:21:03.091 }, 00:21:03.091 "block_size": 512, 00:21:03.091 "claimed": false, 00:21:03.091 "driver_specific": { 00:21:03.091 "passthru": { 00:21:03.091 "base_bdev_name": "Malloc0", 00:21:03.091 "name": "Passthru0" 00:21:03.091 } 00:21:03.091 }, 00:21:03.091 "memory_domains": [ 00:21:03.091 { 00:21:03.091 "dma_device_id": "system", 00:21:03.091 "dma_device_type": 1 00:21:03.091 }, 00:21:03.091 { 00:21:03.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.091 "dma_device_type": 2 00:21:03.091 } 00:21:03.091 ], 00:21:03.091 "name": "Passthru0", 00:21:03.091 "num_blocks": 16384, 00:21:03.091 "product_name": "passthru", 00:21:03.091 "supported_io_types": { 00:21:03.091 "abort": true, 00:21:03.091 "compare": false, 00:21:03.091 "compare_and_write": false, 00:21:03.091 "flush": true, 00:21:03.091 "nvme_admin": false, 00:21:03.091 "nvme_io": false, 00:21:03.091 "read": true, 00:21:03.091 "reset": true, 00:21:03.091 "unmap": true, 00:21:03.091 "write": true, 00:21:03.091 "write_zeroes": true 00:21:03.091 }, 00:21:03.091 "uuid": "064cf8ab-b91b-5596-b6e8-feb9278b54c4", 00:21:03.091 "zoned": false 00:21:03.091 } 00:21:03.091 ]' 00:21:03.091 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:21:03.091 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:21:03.091 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:21:03.091 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.091 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:03.091 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.091 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:03.091 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 
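The rpc_integrity test above exercises the full Malloc0/Passthru0 lifecycle through the suite's rpc_cmd wrapper. The same sequence can be replayed against a running target with the stock scripts/rpc.py client, as in this sketch (the relative path and the jq post-processing are illustrative; the auto-assigned name Malloc0 matches what the log shows for a fresh target):

    ./scripts/rpc.py bdev_malloc_create 8 512            # 8 MiB, 512-byte blocks; the target auto-names it Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 2: Malloc0 plus its Passthru0 wrapper
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length          # back to 0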
00:21:03.091 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:03.349 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.349 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:03.349 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.349 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:03.350 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.350 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:21:03.350 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:21:03.350 ************************************ 00:21:03.350 END TEST rpc_integrity 00:21:03.350 ************************************ 00:21:03.350 00:47:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:21:03.350 00:21:03.350 real 0m0.323s 00:21:03.350 user 0m0.211s 00:21:03.350 sys 0m0.036s 00:21:03.350 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:03.350 00:47:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:03.350 00:47:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:21:03.350 00:47:06 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:03.350 00:47:06 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:03.350 00:47:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:03.350 ************************************ 00:21:03.350 START TEST rpc_plugins 00:21:03.350 ************************************ 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:21:03.350 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.350 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:21:03.350 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.350 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:21:03.350 { 00:21:03.350 "aliases": [ 00:21:03.350 "bd42a286-f494-4dcb-9d32-1eec3111bc76" 00:21:03.350 ], 00:21:03.350 "assigned_rate_limits": { 00:21:03.350 "r_mbytes_per_sec": 0, 00:21:03.350 "rw_ios_per_sec": 0, 00:21:03.350 "rw_mbytes_per_sec": 0, 00:21:03.350 "w_mbytes_per_sec": 0 00:21:03.350 }, 00:21:03.350 "block_size": 4096, 00:21:03.350 "claimed": false, 00:21:03.350 "driver_specific": {}, 00:21:03.350 "memory_domains": [ 00:21:03.350 { 00:21:03.350 "dma_device_id": "system", 00:21:03.350 "dma_device_type": 1 00:21:03.350 }, 00:21:03.350 { 00:21:03.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.350 "dma_device_type": 2 00:21:03.350 } 00:21:03.350 ], 00:21:03.350 "name": "Malloc1", 00:21:03.350 "num_blocks": 256, 00:21:03.350 "product_name": "Malloc disk", 00:21:03.350 "supported_io_types": { 00:21:03.350 "abort": true, 00:21:03.350 "compare": false, 00:21:03.350 "compare_and_write": false, 00:21:03.350 "flush": true, 00:21:03.350 "nvme_admin": false, 00:21:03.350 
"nvme_io": false, 00:21:03.350 "read": true, 00:21:03.350 "reset": true, 00:21:03.350 "unmap": true, 00:21:03.350 "write": true, 00:21:03.350 "write_zeroes": true 00:21:03.350 }, 00:21:03.350 "uuid": "bd42a286-f494-4dcb-9d32-1eec3111bc76", 00:21:03.350 "zoned": false 00:21:03.350 } 00:21:03.350 ]' 00:21:03.350 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:21:03.350 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:21:03.350 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.350 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:21:03.350 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.350 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:21:03.350 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:21:03.608 ************************************ 00:21:03.608 END TEST rpc_plugins 00:21:03.608 ************************************ 00:21:03.608 00:47:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:21:03.608 00:21:03.608 real 0m0.167s 00:21:03.608 user 0m0.106s 00:21:03.608 sys 0m0.021s 00:21:03.608 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:03.608 00:47:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:21:03.608 00:47:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:21:03.608 00:47:06 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:03.608 00:47:06 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:03.608 00:47:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:03.608 ************************************ 00:21:03.608 START TEST rpc_trace_cmd_test 00:21:03.608 ************************************ 00:21:03.608 00:47:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:21:03.608 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:21:03.608 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:21:03.608 00:47:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.608 00:47:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.608 00:47:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.608 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:21:03.608 "bdev": { 00:21:03.608 "mask": "0x8", 00:21:03.608 "tpoint_mask": "0xffffffffffffffff" 00:21:03.608 }, 00:21:03.608 "bdev_nvme": { 00:21:03.608 "mask": "0x4000", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "blobfs": { 00:21:03.608 "mask": "0x80", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "dsa": { 00:21:03.608 "mask": "0x200", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "ftl": { 00:21:03.608 "mask": "0x40", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "iaa": { 00:21:03.608 "mask": "0x1000", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "iscsi_conn": { 00:21:03.608 
"mask": "0x2", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "nvme_pcie": { 00:21:03.608 "mask": "0x800", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "nvme_tcp": { 00:21:03.608 "mask": "0x2000", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "nvmf_rdma": { 00:21:03.608 "mask": "0x10", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "nvmf_tcp": { 00:21:03.608 "mask": "0x20", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "scsi": { 00:21:03.608 "mask": "0x4", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "sock": { 00:21:03.608 "mask": "0x8000", 00:21:03.608 "tpoint_mask": "0x0" 00:21:03.608 }, 00:21:03.608 "thread": { 00:21:03.609 "mask": "0x400", 00:21:03.609 "tpoint_mask": "0x0" 00:21:03.609 }, 00:21:03.609 "tpoint_group_mask": "0x8", 00:21:03.609 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73061" 00:21:03.609 }' 00:21:03.609 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:21:03.609 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:21:03.609 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:21:03.609 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:21:03.609 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:21:03.867 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:21:03.867 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:21:03.867 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:21:03.867 00:47:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:21:03.867 ************************************ 00:21:03.867 END TEST rpc_trace_cmd_test 00:21:03.867 ************************************ 00:21:03.867 00:47:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:21:03.867 00:21:03.867 real 0m0.293s 00:21:03.867 user 0m0.249s 00:21:03.867 sys 0m0.030s 00:21:03.867 00:47:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:03.867 00:47:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.867 00:47:07 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:21:03.867 00:47:07 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:21:03.867 00:47:07 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:03.867 00:47:07 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:03.867 00:47:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:03.867 ************************************ 00:21:03.867 START TEST go_rpc 00:21:03.867 ************************************ 00:21:03.867 00:47:07 rpc.go_rpc -- common/autotest_common.sh@1122 -- # go_rpc 00:21:03.867 00:47:07 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:21:03.867 00:47:07 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:21:03.867 00:47:07 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:21:03.867 00:47:07 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:21:03.867 00:47:07 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:21:03.867 00:47:07 rpc.go_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.867 00:47:07 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:03.867 00:47:07 rpc.go_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.867 00:47:07 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:21:03.867 00:47:07 
rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:21:04.126 00:47:07 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["4fc652f0-1440-4e1a-a3f6-38a13b51d546"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"4fc652f0-1440-4e1a-a3f6-38a13b51d546","zoned":false}]' 00:21:04.126 00:47:07 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:21:04.126 00:47:07 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:21:04.126 00:47:07 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:21:04.126 00:47:07 rpc.go_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.126 00:47:07 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:04.126 00:47:07 rpc.go_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.126 00:47:07 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:21:04.126 00:47:07 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:21:04.126 00:47:07 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:21:04.126 00:47:07 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:21:04.126 ************************************ 00:21:04.126 END TEST go_rpc 00:21:04.126 ************************************ 00:21:04.126 00:21:04.126 real 0m0.232s 00:21:04.126 user 0m0.161s 00:21:04.126 sys 0m0.036s 00:21:04.126 00:47:07 rpc.go_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:04.126 00:47:07 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:04.126 00:47:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:21:04.126 00:47:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:21:04.126 00:47:07 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:04.126 00:47:07 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:04.126 00:47:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:04.126 ************************************ 00:21:04.126 START TEST rpc_daemon_integrity 00:21:04.126 ************************************ 00:21:04.126 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:21:04.126 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:04.126 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.126 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:04.126 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.126 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:21:04.126 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:04.385 
00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:21:04.385 { 00:21:04.385 "aliases": [ 00:21:04.385 "425db230-923f-443e-976b-92645a5afe40" 00:21:04.385 ], 00:21:04.385 "assigned_rate_limits": { 00:21:04.385 "r_mbytes_per_sec": 0, 00:21:04.385 "rw_ios_per_sec": 0, 00:21:04.385 "rw_mbytes_per_sec": 0, 00:21:04.385 "w_mbytes_per_sec": 0 00:21:04.385 }, 00:21:04.385 "block_size": 512, 00:21:04.385 "claimed": false, 00:21:04.385 "driver_specific": {}, 00:21:04.385 "memory_domains": [ 00:21:04.385 { 00:21:04.385 "dma_device_id": "system", 00:21:04.385 "dma_device_type": 1 00:21:04.385 }, 00:21:04.385 { 00:21:04.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.385 "dma_device_type": 2 00:21:04.385 } 00:21:04.385 ], 00:21:04.385 "name": "Malloc3", 00:21:04.385 "num_blocks": 16384, 00:21:04.385 "product_name": "Malloc disk", 00:21:04.385 "supported_io_types": { 00:21:04.385 "abort": true, 00:21:04.385 "compare": false, 00:21:04.385 "compare_and_write": false, 00:21:04.385 "flush": true, 00:21:04.385 "nvme_admin": false, 00:21:04.385 "nvme_io": false, 00:21:04.385 "read": true, 00:21:04.385 "reset": true, 00:21:04.385 "unmap": true, 00:21:04.385 "write": true, 00:21:04.385 "write_zeroes": true 00:21:04.385 }, 00:21:04.385 "uuid": "425db230-923f-443e-976b-92645a5afe40", 00:21:04.385 "zoned": false 00:21:04.385 } 00:21:04.385 ]' 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.385 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:04.385 [2024-05-15 00:47:07.517636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:21:04.385 [2024-05-15 00:47:07.517701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.385 [2024-05-15 00:47:07.517730] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1334fb0 00:21:04.386 [2024-05-15 00:47:07.517740] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.386 [2024-05-15 00:47:07.519685] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.386 [2024-05-15 00:47:07.519721] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:21:04.386 Passthru0 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:04.386 00:47:07 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:21:04.386 { 00:21:04.386 "aliases": [ 00:21:04.386 "425db230-923f-443e-976b-92645a5afe40" 00:21:04.386 ], 00:21:04.386 "assigned_rate_limits": { 00:21:04.386 "r_mbytes_per_sec": 0, 00:21:04.386 "rw_ios_per_sec": 0, 00:21:04.386 "rw_mbytes_per_sec": 0, 00:21:04.386 "w_mbytes_per_sec": 0 00:21:04.386 }, 00:21:04.386 "block_size": 512, 00:21:04.386 "claim_type": "exclusive_write", 00:21:04.386 "claimed": true, 00:21:04.386 "driver_specific": {}, 00:21:04.386 "memory_domains": [ 00:21:04.386 { 00:21:04.386 "dma_device_id": "system", 00:21:04.386 "dma_device_type": 1 00:21:04.386 }, 00:21:04.386 { 00:21:04.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.386 "dma_device_type": 2 00:21:04.386 } 00:21:04.386 ], 00:21:04.386 "name": "Malloc3", 00:21:04.386 "num_blocks": 16384, 00:21:04.386 "product_name": "Malloc disk", 00:21:04.386 "supported_io_types": { 00:21:04.386 "abort": true, 00:21:04.386 "compare": false, 00:21:04.386 "compare_and_write": false, 00:21:04.386 "flush": true, 00:21:04.386 "nvme_admin": false, 00:21:04.386 "nvme_io": false, 00:21:04.386 "read": true, 00:21:04.386 "reset": true, 00:21:04.386 "unmap": true, 00:21:04.386 "write": true, 00:21:04.386 "write_zeroes": true 00:21:04.386 }, 00:21:04.386 "uuid": "425db230-923f-443e-976b-92645a5afe40", 00:21:04.386 "zoned": false 00:21:04.386 }, 00:21:04.386 { 00:21:04.386 "aliases": [ 00:21:04.386 "16fa6164-2774-5722-82fa-eebf953059bd" 00:21:04.386 ], 00:21:04.386 "assigned_rate_limits": { 00:21:04.386 "r_mbytes_per_sec": 0, 00:21:04.386 "rw_ios_per_sec": 0, 00:21:04.386 "rw_mbytes_per_sec": 0, 00:21:04.386 "w_mbytes_per_sec": 0 00:21:04.386 }, 00:21:04.386 "block_size": 512, 00:21:04.386 "claimed": false, 00:21:04.386 "driver_specific": { 00:21:04.386 "passthru": { 00:21:04.386 "base_bdev_name": "Malloc3", 00:21:04.386 "name": "Passthru0" 00:21:04.386 } 00:21:04.386 }, 00:21:04.386 "memory_domains": [ 00:21:04.386 { 00:21:04.386 "dma_device_id": "system", 00:21:04.386 "dma_device_type": 1 00:21:04.386 }, 00:21:04.386 { 00:21:04.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.386 "dma_device_type": 2 00:21:04.386 } 00:21:04.386 ], 00:21:04.386 "name": "Passthru0", 00:21:04.386 "num_blocks": 16384, 00:21:04.386 "product_name": "passthru", 00:21:04.386 "supported_io_types": { 00:21:04.386 "abort": true, 00:21:04.386 "compare": false, 00:21:04.386 "compare_and_write": false, 00:21:04.386 "flush": true, 00:21:04.386 "nvme_admin": false, 00:21:04.386 "nvme_io": false, 00:21:04.386 "read": true, 00:21:04.386 "reset": true, 00:21:04.386 "unmap": true, 00:21:04.386 "write": true, 00:21:04.386 "write_zeroes": true 00:21:04.386 }, 00:21:04.386 "uuid": "16fa6164-2774-5722-82fa-eebf953059bd", 00:21:04.386 "zoned": false 00:21:04.386 } 00:21:04.386 ]' 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc3 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:21:04.386 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:21:04.644 ************************************ 00:21:04.644 END TEST rpc_daemon_integrity 00:21:04.644 ************************************ 00:21:04.644 00:47:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:21:04.644 00:21:04.644 real 0m0.348s 00:21:04.644 user 0m0.219s 00:21:04.644 sys 0m0.052s 00:21:04.644 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:04.644 00:47:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:04.644 00:47:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:04.644 00:47:07 rpc -- rpc/rpc.sh@84 -- # killprocess 73061 00:21:04.644 00:47:07 rpc -- common/autotest_common.sh@947 -- # '[' -z 73061 ']' 00:21:04.644 00:47:07 rpc -- common/autotest_common.sh@951 -- # kill -0 73061 00:21:04.644 00:47:07 rpc -- common/autotest_common.sh@952 -- # uname 00:21:04.644 00:47:07 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:04.644 00:47:07 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73061 00:21:04.644 00:47:07 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:04.644 killing process with pid 73061 00:21:04.644 00:47:07 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:04.644 00:47:07 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73061' 00:21:04.644 00:47:07 rpc -- common/autotest_common.sh@966 -- # kill 73061 00:21:04.644 00:47:07 rpc -- common/autotest_common.sh@971 -- # wait 73061 00:21:05.262 00:21:05.262 real 0m3.434s 00:21:05.262 user 0m4.419s 00:21:05.262 sys 0m0.908s 00:21:05.262 00:47:08 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:05.262 ************************************ 00:21:05.262 END TEST rpc 00:21:05.262 00:47:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:05.262 ************************************ 00:21:05.262 00:47:08 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:21:05.262 00:47:08 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:05.262 00:47:08 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:05.262 00:47:08 -- common/autotest_common.sh@10 -- # set +x 00:21:05.262 ************************************ 00:21:05.262 START TEST skip_rpc 00:21:05.262 ************************************ 00:21:05.262 00:47:08 skip_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:21:05.262 * Looking for test storage... 
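The rpc_trace_cmd_test run above showed trace_get_info reporting a tpoint_group_mask of 0x8 and a shared-memory trace file under /dev/shm for the '-e bdev' target. While such a target is still running, the same state could be inspected by hand roughly as follows; the spdk_trace binary location is an assumption about the build layout, and the flags follow the target's own startup notice:

    ./scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path    # e.g. /dev/shm/spdk_tgt_trace.pid<pid>, as printed above
    ./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask   # 0xffffffffffffffff once '-e bdev' enables the group
    ./build/bin/spdk_trace -s spdk_tgt -p "$tgt_pid"            # binary path assumed; flags as suggested at target startup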
00:21:05.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:21:05.262 00:47:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:21:05.262 00:47:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:21:05.262 00:47:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:21:05.262 00:47:08 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:05.262 00:47:08 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:05.262 00:47:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:05.262 ************************************ 00:21:05.262 START TEST skip_rpc 00:21:05.262 ************************************ 00:21:05.262 00:47:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:21:05.262 00:47:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=73323 00:21:05.262 00:47:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:21:05.262 00:47:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:21:05.262 00:47:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:21:05.262 [2024-05-15 00:47:08.534644] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:21:05.262 [2024-05-15 00:47:08.534766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73323 ] 00:21:05.521 [2024-05-15 00:47:08.668865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.521 [2024-05-15 00:47:08.772112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.794 2024/05/15 00:47:13 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- 
common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 73323 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 73323 ']' 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 73323 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73323 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:10.794 killing process with pid 73323 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73323' 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 73323 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 73323 00:21:10.794 00:21:10.794 real 0m5.415s 00:21:10.794 user 0m5.037s 00:21:10.794 sys 0m0.280s 00:21:10.794 ************************************ 00:21:10.794 END TEST skip_rpc 00:21:10.794 ************************************ 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:10.794 00:47:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.794 00:47:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:21:10.794 00:47:13 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:10.794 00:47:13 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:10.794 00:47:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.794 ************************************ 00:21:10.794 START TEST skip_rpc_with_json 00:21:10.794 ************************************ 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=73411 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 73411 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 73411 ']' 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:10.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
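The skip_rpc case that just completed is a negative test: the target is started with --no-rpc-server, so the later spdk_get_version call must fail because /var/tmp/spdk.sock never appears. A stand-alone sketch of that check, with an explicit if/kill replacing the suite's NOT and killprocess helpers and illustrative paths, might look like:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                    # same grace period the test uses before probing
    if ./scripts/rpc.py spdk_get_version > /dev/null 2>&1; then
        echo "unexpected: RPC server answered without a socket" >&2
    else
        echo "expected failure: nothing listening on /var/tmp/spdk.sock"
    fi
    kill "$tgt_pid"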
00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:10.794 00:47:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:21:10.794 [2024-05-15 00:47:14.034446] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:21:10.794 [2024-05-15 00:47:14.035142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73411 ] 00:21:11.053 [2024-05-15 00:47:14.179439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.053 [2024-05-15 00:47:14.271871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:21:11.989 [2024-05-15 00:47:15.072680] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:21:11.989 2024/05/15 00:47:15 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:21:11.989 request: 00:21:11.989 { 00:21:11.989 "method": "nvmf_get_transports", 00:21:11.989 "params": { 00:21:11.989 "trtype": "tcp" 00:21:11.989 } 00:21:11.989 } 00:21:11.989 Got JSON-RPC error response 00:21:11.989 GoRPCClient: error on JSON-RPC call 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:21:11.989 [2024-05-15 00:47:15.084759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.989 00:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:21:11.989 { 00:21:11.989 "subsystems": [ 00:21:11.989 { 00:21:11.989 "subsystem": "keyring", 00:21:11.989 "config": [] 00:21:11.989 }, 00:21:11.989 { 00:21:11.989 "subsystem": "iobuf", 00:21:11.989 "config": [ 00:21:11.989 { 00:21:11.989 "method": "iobuf_set_options", 00:21:11.989 "params": { 00:21:11.989 "large_bufsize": 135168, 00:21:11.989 "large_pool_count": 1024, 00:21:11.989 "small_bufsize": 8192, 00:21:11.989 "small_pool_count": 8192 00:21:11.989 } 00:21:11.989 } 00:21:11.989 ] 00:21:11.989 
}, 00:21:11.989 { 00:21:11.989 "subsystem": "sock", 00:21:11.989 "config": [ 00:21:11.989 { 00:21:11.989 "method": "sock_impl_set_options", 00:21:11.989 "params": { 00:21:11.989 "enable_ktls": false, 00:21:11.989 "enable_placement_id": 0, 00:21:11.989 "enable_quickack": false, 00:21:11.989 "enable_recv_pipe": true, 00:21:11.989 "enable_zerocopy_send_client": false, 00:21:11.989 "enable_zerocopy_send_server": true, 00:21:11.989 "impl_name": "posix", 00:21:11.989 "recv_buf_size": 2097152, 00:21:11.989 "send_buf_size": 2097152, 00:21:11.989 "tls_version": 0, 00:21:11.989 "zerocopy_threshold": 0 00:21:11.989 } 00:21:11.989 }, 00:21:11.989 { 00:21:11.989 "method": "sock_impl_set_options", 00:21:11.989 "params": { 00:21:11.989 "enable_ktls": false, 00:21:11.989 "enable_placement_id": 0, 00:21:11.989 "enable_quickack": false, 00:21:11.989 "enable_recv_pipe": true, 00:21:11.989 "enable_zerocopy_send_client": false, 00:21:11.989 "enable_zerocopy_send_server": true, 00:21:11.989 "impl_name": "ssl", 00:21:11.989 "recv_buf_size": 4096, 00:21:11.989 "send_buf_size": 4096, 00:21:11.989 "tls_version": 0, 00:21:11.989 "zerocopy_threshold": 0 00:21:11.989 } 00:21:11.989 } 00:21:11.989 ] 00:21:11.989 }, 00:21:11.989 { 00:21:11.989 "subsystem": "vmd", 00:21:11.989 "config": [] 00:21:11.989 }, 00:21:11.989 { 00:21:11.989 "subsystem": "accel", 00:21:11.989 "config": [ 00:21:11.989 { 00:21:11.989 "method": "accel_set_options", 00:21:11.989 "params": { 00:21:11.989 "buf_count": 2048, 00:21:11.989 "large_cache_size": 16, 00:21:11.989 "sequence_count": 2048, 00:21:11.989 "small_cache_size": 128, 00:21:11.989 "task_count": 2048 00:21:11.989 } 00:21:11.989 } 00:21:11.989 ] 00:21:11.989 }, 00:21:11.989 { 00:21:11.989 "subsystem": "bdev", 00:21:11.989 "config": [ 00:21:11.989 { 00:21:11.989 "method": "bdev_set_options", 00:21:11.989 "params": { 00:21:11.989 "bdev_auto_examine": true, 00:21:11.989 "bdev_io_cache_size": 256, 00:21:11.989 "bdev_io_pool_size": 65535, 00:21:11.989 "iobuf_large_cache_size": 16, 00:21:11.989 "iobuf_small_cache_size": 128 00:21:11.989 } 00:21:11.989 }, 00:21:11.989 { 00:21:11.989 "method": "bdev_raid_set_options", 00:21:11.989 "params": { 00:21:11.989 "process_window_size_kb": 1024 00:21:11.989 } 00:21:11.989 }, 00:21:11.989 { 00:21:11.989 "method": "bdev_iscsi_set_options", 00:21:11.989 "params": { 00:21:11.989 "timeout_sec": 30 00:21:11.989 } 00:21:11.989 }, 00:21:11.989 { 00:21:11.989 "method": "bdev_nvme_set_options", 00:21:11.989 "params": { 00:21:11.989 "action_on_timeout": "none", 00:21:11.989 "allow_accel_sequence": false, 00:21:11.989 "arbitration_burst": 0, 00:21:11.989 "bdev_retry_count": 3, 00:21:11.989 "ctrlr_loss_timeout_sec": 0, 00:21:11.989 "delay_cmd_submit": true, 00:21:11.989 "dhchap_dhgroups": [ 00:21:11.989 "null", 00:21:11.989 "ffdhe2048", 00:21:11.989 "ffdhe3072", 00:21:11.989 "ffdhe4096", 00:21:11.989 "ffdhe6144", 00:21:11.989 "ffdhe8192" 00:21:11.989 ], 00:21:11.989 "dhchap_digests": [ 00:21:11.989 "sha256", 00:21:11.989 "sha384", 00:21:11.989 "sha512" 00:21:11.989 ], 00:21:11.989 "disable_auto_failback": false, 00:21:11.989 "fast_io_fail_timeout_sec": 0, 00:21:11.989 "generate_uuids": false, 00:21:11.989 "high_priority_weight": 0, 00:21:11.989 "io_path_stat": false, 00:21:11.989 "io_queue_requests": 0, 00:21:11.989 "keep_alive_timeout_ms": 10000, 00:21:11.989 "low_priority_weight": 0, 00:21:11.989 "medium_priority_weight": 0, 00:21:11.989 "nvme_adminq_poll_period_us": 10000, 00:21:11.989 "nvme_error_stat": false, 00:21:11.989 "nvme_ioq_poll_period_us": 0, 
00:21:11.989 "rdma_cm_event_timeout_ms": 0, 00:21:11.989 "rdma_max_cq_size": 0, 00:21:11.989 "rdma_srq_size": 0, 00:21:11.989 "reconnect_delay_sec": 0, 00:21:11.989 "timeout_admin_us": 0, 00:21:11.989 "timeout_us": 0, 00:21:11.989 "transport_ack_timeout": 0, 00:21:11.989 "transport_retry_count": 4, 00:21:11.989 "transport_tos": 0 00:21:11.989 } 00:21:11.989 }, 00:21:11.989 { 00:21:11.989 "method": "bdev_nvme_set_hotplug", 00:21:11.989 "params": { 00:21:11.990 "enable": false, 00:21:11.990 "period_us": 100000 00:21:11.990 } 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "method": "bdev_wait_for_examine" 00:21:11.990 } 00:21:11.990 ] 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "subsystem": "scsi", 00:21:11.990 "config": null 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "subsystem": "scheduler", 00:21:11.990 "config": [ 00:21:11.990 { 00:21:11.990 "method": "framework_set_scheduler", 00:21:11.990 "params": { 00:21:11.990 "name": "static" 00:21:11.990 } 00:21:11.990 } 00:21:11.990 ] 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "subsystem": "vhost_scsi", 00:21:11.990 "config": [] 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "subsystem": "vhost_blk", 00:21:11.990 "config": [] 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "subsystem": "ublk", 00:21:11.990 "config": [] 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "subsystem": "nbd", 00:21:11.990 "config": [] 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "subsystem": "nvmf", 00:21:11.990 "config": [ 00:21:11.990 { 00:21:11.990 "method": "nvmf_set_config", 00:21:11.990 "params": { 00:21:11.990 "admin_cmd_passthru": { 00:21:11.990 "identify_ctrlr": false 00:21:11.990 }, 00:21:11.990 "discovery_filter": "match_any" 00:21:11.990 } 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "method": "nvmf_set_max_subsystems", 00:21:11.990 "params": { 00:21:11.990 "max_subsystems": 1024 00:21:11.990 } 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "method": "nvmf_set_crdt", 00:21:11.990 "params": { 00:21:11.990 "crdt1": 0, 00:21:11.990 "crdt2": 0, 00:21:11.990 "crdt3": 0 00:21:11.990 } 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "method": "nvmf_create_transport", 00:21:11.990 "params": { 00:21:11.990 "abort_timeout_sec": 1, 00:21:11.990 "ack_timeout": 0, 00:21:11.990 "buf_cache_size": 4294967295, 00:21:11.990 "c2h_success": true, 00:21:11.990 "data_wr_pool_size": 0, 00:21:11.990 "dif_insert_or_strip": false, 00:21:11.990 "in_capsule_data_size": 4096, 00:21:11.990 "io_unit_size": 131072, 00:21:11.990 "max_aq_depth": 128, 00:21:11.990 "max_io_qpairs_per_ctrlr": 127, 00:21:11.990 "max_io_size": 131072, 00:21:11.990 "max_queue_depth": 128, 00:21:11.990 "num_shared_buffers": 511, 00:21:11.990 "sock_priority": 0, 00:21:11.990 "trtype": "TCP", 00:21:11.990 "zcopy": false 00:21:11.990 } 00:21:11.990 } 00:21:11.990 ] 00:21:11.990 }, 00:21:11.990 { 00:21:11.990 "subsystem": "iscsi", 00:21:11.990 "config": [ 00:21:11.990 { 00:21:11.990 "method": "iscsi_set_options", 00:21:11.990 "params": { 00:21:11.990 "allow_duplicated_isid": false, 00:21:11.990 "chap_group": 0, 00:21:11.990 "data_out_pool_size": 2048, 00:21:11.990 "default_time2retain": 20, 00:21:11.990 "default_time2wait": 2, 00:21:11.990 "disable_chap": false, 00:21:11.990 "error_recovery_level": 0, 00:21:11.990 "first_burst_length": 8192, 00:21:11.990 "immediate_data": true, 00:21:11.990 "immediate_data_pool_size": 16384, 00:21:11.990 "max_connections_per_session": 2, 00:21:11.990 "max_large_datain_per_connection": 64, 00:21:11.990 "max_queue_depth": 64, 00:21:11.990 "max_r2t_per_connection": 4, 00:21:11.990 "max_sessions": 128, 
00:21:11.990 "mutual_chap": false, 00:21:11.990 "node_base": "iqn.2016-06.io.spdk", 00:21:11.990 "nop_in_interval": 30, 00:21:11.990 "nop_timeout": 60, 00:21:11.990 "pdu_pool_size": 36864, 00:21:11.990 "require_chap": false 00:21:11.990 } 00:21:11.990 } 00:21:11.990 ] 00:21:11.990 } 00:21:11.990 ] 00:21:11.990 } 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 73411 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 73411 ']' 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 73411 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73411 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:11.990 killing process with pid 73411 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73411' 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 73411 00:21:11.990 00:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 73411 00:21:12.557 00:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=73456 00:21:12.557 00:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:21:12.557 00:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 73456 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 73456 ']' 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 73456 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73456 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:17.822 killing process with pid 73456 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73456' 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 73456 00:21:17.822 00:47:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 73456 00:21:18.092 00:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:21:18.092 00:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:21:18.092 00:21:18.092 real 
0m7.425s 00:21:18.092 user 0m7.092s 00:21:18.092 sys 0m0.784s 00:21:18.092 00:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:18.092 ************************************ 00:21:18.092 END TEST skip_rpc_with_json 00:21:18.092 ************************************ 00:21:18.092 00:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:21:18.350 00:47:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:21:18.350 00:47:21 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:18.350 00:47:21 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:18.350 00:47:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.350 ************************************ 00:21:18.350 START TEST skip_rpc_with_delay 00:21:18.350 ************************************ 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:18.350 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:18.351 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:21:18.351 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:21:18.351 [2024-05-15 00:47:21.495213] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
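The skip_rpc_with_json run above creates a TCP transport, saves the live configuration with save_config, then proves that a second target started with --json recreates it by grepping that target's log for 'TCP Transport Init'. A hand-run approximation under the same assumptions (default socket, illustrative file names, sleep standing in for waitforlisten):

    ./build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    sleep 5
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json
    kill "$tgt_pid"; wait "$tgt_pid" 2> /dev/null
    # replay the saved config and check that the transport comes back
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt && echo "TCP transport restored from config.json"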
00:21:18.351 [2024-05-15 00:47:21.495394] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:21:18.351 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:21:18.351 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:18.351 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:18.351 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:18.351 00:21:18.351 real 0m0.092s 00:21:18.351 user 0m0.062s 00:21:18.351 sys 0m0.029s 00:21:18.351 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:18.351 ************************************ 00:21:18.351 END TEST skip_rpc_with_delay 00:21:18.351 ************************************ 00:21:18.351 00:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:21:18.351 00:47:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:21:18.351 00:47:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:21:18.351 00:47:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:21:18.351 00:47:21 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:18.351 00:47:21 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:18.351 00:47:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:18.351 ************************************ 00:21:18.351 START TEST exit_on_failed_rpc_init 00:21:18.351 ************************************ 00:21:18.351 00:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:21:18.351 00:47:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=73571 00:21:18.351 00:47:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 73571 00:21:18.351 00:47:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:18.351 00:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 73571 ']' 00:21:18.351 00:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.351 00:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:18.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.351 00:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.351 00:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:18.351 00:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.351 [2024-05-15 00:47:21.629948] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:21:18.351 [2024-05-15 00:47:21.630070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73571 ] 00:21:18.610 [2024-05-15 00:47:21.764471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.610 [2024-05-15 00:47:21.888931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:21:19.545 00:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:21:19.545 [2024-05-15 00:47:22.698728] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:21:19.545 [2024-05-15 00:47:22.698883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73601 ] 00:21:19.804 [2024-05-15 00:47:22.852847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.804 [2024-05-15 00:47:22.959465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.804 [2024-05-15 00:47:22.959571] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
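The error above is the one this test wants: a first target (pid 73571, core 0) already owns /var/tmp/spdk.sock, so the second instance started on core 1 cannot bind the same RPC socket and exits non-zero via spdk_app_stop. A simplified reproduction of that scenario, assuming the default RPC socket path and with the polling helpers trimmed down to a plain sleep:

    #!/usr/bin/env bash
    # Hedged sketch of the socket-in-use scenario; not the autotest script itself.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x1 &            # first target listens on /var/tmp/spdk.sock
    first_pid=$!
    sleep 2                         # crude wait; the real test polls the RPC socket instead
    if "$SPDK_BIN" -m 0x2; then     # second target tries the same default RPC socket
        echo "ERROR: second target should have failed" >&2
    else
        echo "second target failed as expected (RPC socket in use)"
    fi
    kill -SIGINT "$first_pid"; wait "$first_pid"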
00:21:19.804 [2024-05-15 00:47:22.959588] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:19.804 [2024-05-15 00:47:22.959618] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 73571 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 73571 ']' 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 73571 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73571 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:19.804 killing process with pid 73571 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73571' 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 73571 00:21:19.804 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 73571 00:21:20.372 00:21:20.372 real 0m2.059s 00:21:20.372 user 0m2.286s 00:21:20.372 sys 0m0.551s 00:21:20.372 ************************************ 00:21:20.372 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:20.372 00:47:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:21:20.372 END TEST exit_on_failed_rpc_init 00:21:20.372 ************************************ 00:21:20.630 00:47:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:21:20.630 00:21:20.630 real 0m15.295s 00:21:20.630 user 0m14.579s 00:21:20.630 sys 0m1.837s 00:21:20.630 00:47:23 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:20.630 00:47:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.630 ************************************ 00:21:20.630 END TEST skip_rpc 00:21:20.630 ************************************ 00:21:20.630 00:47:23 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:21:20.630 00:47:23 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:20.630 00:47:23 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:20.630 00:47:23 -- common/autotest_common.sh@10 -- # set +x 00:21:20.630 
************************************ 00:21:20.630 START TEST rpc_client 00:21:20.630 ************************************ 00:21:20.630 00:47:23 rpc_client -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:21:20.630 * Looking for test storage... 00:21:20.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:21:20.630 00:47:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:21:20.630 OK 00:21:20.630 00:47:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:21:20.630 00:21:20.630 real 0m0.104s 00:21:20.630 user 0m0.051s 00:21:20.630 sys 0m0.059s 00:21:20.630 00:47:23 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:20.630 ************************************ 00:21:20.630 END TEST rpc_client 00:21:20.630 ************************************ 00:21:20.630 00:47:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:21:20.630 00:47:23 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:21:20.630 00:47:23 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:20.630 00:47:23 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:20.630 00:47:23 -- common/autotest_common.sh@10 -- # set +x 00:21:20.630 ************************************ 00:21:20.630 START TEST json_config 00:21:20.630 ************************************ 00:21:20.630 00:47:23 json_config -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:20.889 00:47:23 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.889 00:47:23 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.889 00:47:23 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.889 00:47:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.889 00:47:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.889 00:47:23 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.889 00:47:23 json_config -- paths/export.sh@5 -- # export PATH 00:21:20.889 00:47:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@47 -- # : 0 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:20.889 00:47:23 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:21:20.889 00:47:23 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:21:20.889 INFO: JSON configuration test init 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:21:20.889 00:47:23 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:20.889 00:47:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:21:20.889 00:47:23 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:20.889 00:47:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:20.889 00:47:23 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:21:20.889 00:47:23 json_config -- json_config/common.sh@9 -- # local app=target 00:21:20.889 00:47:23 json_config -- json_config/common.sh@10 -- # shift 00:21:20.889 00:47:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:21:20.889 00:47:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:21:20.889 00:47:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:21:20.889 00:47:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:20.889 00:47:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:20.889 00:47:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73719 00:21:20.889 Waiting for target to run... 00:21:20.889 00:47:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:21:20.889 00:47:23 json_config -- json_config/common.sh@25 -- # waitforlisten 73719 /var/tmp/spdk_tgt.sock 00:21:20.889 00:47:23 json_config -- common/autotest_common.sh@828 -- # '[' -z 73719 ']' 00:21:20.889 00:47:23 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:21:20.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
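The "Waiting for process to start up and listen on UNIX domain socket ..." message comes from the waitforlisten helper, which blocks until the target's RPC socket answers (or the process dies). One way such a wait can be implemented, as a sketch only and not the actual common.sh helper; it assumes rpc.py and the standard rpc_get_methods call:

    #!/usr/bin/env bash
    # Poll an SPDK application's RPC socket until it responds or the retry budget runs out.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} retries=${3:-100}
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1            # target died while starting
            if "$rpc_py" -t 1 -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                                      # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }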
00:21:20.889 00:47:23 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:20.889 00:47:23 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:21:20.889 00:47:23 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:21:20.889 00:47:23 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:20.889 00:47:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:20.889 [2024-05-15 00:47:24.040625] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:21:20.889 [2024-05-15 00:47:24.040755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73719 ] 00:21:21.454 [2024-05-15 00:47:24.571417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.454 [2024-05-15 00:47:24.664313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.021 00:47:25 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:22.021 00:21:22.021 00:47:25 json_config -- common/autotest_common.sh@861 -- # return 0 00:21:22.021 00:47:25 json_config -- json_config/common.sh@26 -- # echo '' 00:21:22.021 00:47:25 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:21:22.021 00:47:25 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:21:22.021 00:47:25 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:22.021 00:47:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:22.021 00:47:25 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:21:22.021 00:47:25 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:21:22.021 00:47:25 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:22.021 00:47:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:22.021 00:47:25 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:21:22.021 00:47:25 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:21:22.021 00:47:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:21:22.279 00:47:25 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:21:22.279 00:47:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:21:22.280 00:47:25 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:22.280 00:47:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:22.538 00:47:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:21:22.538 00:47:25 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:21:22.538 00:47:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:21:22.538 00:47:25 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:21:22.538 00:47:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:21:22.539 00:47:25 json_config -- 
json_config/json_config.sh@48 -- # jq -r '.[]' 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@48 -- # local get_types 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:21:22.797 00:47:25 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:22.797 00:47:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@55 -- # return 0 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:21:22.797 00:47:25 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:22.797 00:47:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:21:22.797 00:47:25 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:21:22.797 00:47:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:21:23.056 MallocForNvmf0 00:21:23.056 00:47:26 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:21:23.056 00:47:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:21:23.314 MallocForNvmf1 00:21:23.314 00:47:26 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:21:23.314 00:47:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:21:23.572 [2024-05-15 00:47:26.651009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.572 00:47:26 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.572 00:47:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.867 00:47:26 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:21:23.867 00:47:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:21:24.139 00:47:27 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:21:24.139 00:47:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:21:24.397 00:47:27 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:21:24.397 00:47:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:21:24.397 [2024-05-15 00:47:27.675288] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:24.397 [2024-05-15 00:47:27.675638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:24.655 00:47:27 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:21:24.655 00:47:27 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:24.655 00:47:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:24.655 00:47:27 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:21:24.655 00:47:27 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:24.655 00:47:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:24.655 00:47:27 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:21:24.655 00:47:27 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:21:24.655 00:47:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:21:24.914 MallocBdevForConfigChangeCheck 00:21:24.914 00:47:28 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:21:24.914 00:47:28 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:24.914 00:47:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:24.914 00:47:28 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:21:24.914 00:47:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:21:25.481 INFO: shutting down applications... 00:21:25.481 00:47:28 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
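Condensed, the create_nvmf_subsystem_config step traced above amounts to the following RPC sequence against the target socket; these are the same commands the log shows, collected here for readability (paths match this workspace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    # Two malloc bdevs to serve as namespaces.
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, then one subsystem with both namespaces and a listener on 127.0.0.1:4420.
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420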
00:21:25.481 00:47:28 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:21:25.481 00:47:28 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:21:25.482 00:47:28 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:21:25.482 00:47:28 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:21:25.740 Calling clear_iscsi_subsystem 00:21:25.740 Calling clear_nvmf_subsystem 00:21:25.740 Calling clear_nbd_subsystem 00:21:25.740 Calling clear_ublk_subsystem 00:21:25.740 Calling clear_vhost_blk_subsystem 00:21:25.740 Calling clear_vhost_scsi_subsystem 00:21:25.740 Calling clear_bdev_subsystem 00:21:25.740 00:47:28 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:21:25.740 00:47:28 json_config -- json_config/json_config.sh@343 -- # count=100 00:21:25.740 00:47:28 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:21:25.740 00:47:28 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:21:25.740 00:47:28 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:21:25.740 00:47:28 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:21:25.998 00:47:29 json_config -- json_config/json_config.sh@345 -- # break 00:21:25.998 00:47:29 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:21:25.998 00:47:29 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:21:25.998 00:47:29 json_config -- json_config/common.sh@31 -- # local app=target 00:21:25.998 00:47:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:21:25.998 00:47:29 json_config -- json_config/common.sh@35 -- # [[ -n 73719 ]] 00:21:25.998 00:47:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 73719 00:21:25.998 [2024-05-15 00:47:29.280592] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:25.998 00:47:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:21:25.998 00:47:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:25.998 00:47:29 json_config -- json_config/common.sh@41 -- # kill -0 73719 00:21:25.998 00:47:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:21:26.564 00:47:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:21:26.564 00:47:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:26.564 00:47:29 json_config -- json_config/common.sh@41 -- # kill -0 73719 00:21:26.564 00:47:29 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:21:26.564 00:47:29 json_config -- json_config/common.sh@43 -- # break 00:21:26.564 00:47:29 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:21:26.564 SPDK target shutdown done 00:21:26.564 00:47:29 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:21:26.564 INFO: relaunching applications... 00:21:26.564 00:47:29 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
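The shutdown traced above sends SIGINT and then polls for process exit instead of killing the target outright, so the application gets a chance to tear its subsystems down cleanly; the 30 iterations of 0.5 s mirror what the trace shows. A minimal equivalent of that loop, as a sketch rather than the shared json_config/common.sh helper:

    shutdown_spdk_app() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null || return 0        # already gone
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        echo "target $pid did not exit in time" >&2
        return 1
    }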
00:21:26.564 00:47:29 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:26.564 00:47:29 json_config -- json_config/common.sh@9 -- # local app=target 00:21:26.564 00:47:29 json_config -- json_config/common.sh@10 -- # shift 00:21:26.564 00:47:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:21:26.564 00:47:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:21:26.564 00:47:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:21:26.564 00:47:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:26.564 00:47:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:26.564 00:47:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73999 00:21:26.564 Waiting for target to run... 00:21:26.564 00:47:29 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:26.564 00:47:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:21:26.564 00:47:29 json_config -- json_config/common.sh@25 -- # waitforlisten 73999 /var/tmp/spdk_tgt.sock 00:21:26.564 00:47:29 json_config -- common/autotest_common.sh@828 -- # '[' -z 73999 ']' 00:21:26.564 00:47:29 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:21:26.564 00:47:29 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:26.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:21:26.564 00:47:29 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:21:26.564 00:47:29 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:26.564 00:47:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:26.822 [2024-05-15 00:47:29.863831] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:21:26.822 [2024-05-15 00:47:29.863970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73999 ] 00:21:27.410 [2024-05-15 00:47:30.382314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.410 [2024-05-15 00:47:30.453859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.669 [2024-05-15 00:47:30.754179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.669 [2024-05-15 00:47:30.786077] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:27.669 [2024-05-15 00:47:30.786352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:27.669 00:47:30 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:27.669 00:21:27.669 00:47:30 json_config -- common/autotest_common.sh@861 -- # return 0 00:21:27.669 00:47:30 json_config -- json_config/common.sh@26 -- # echo '' 00:21:27.669 00:47:30 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:21:27.669 INFO: Checking if target configuration is the same... 
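The check traced next ("Checking if target configuration is the same...") works by dumping the live configuration with save_config, normalizing both that dump and spdk_tgt_config.json through config_filter.py -method sort, and running diff -u; an empty diff means the relaunch from JSON reproduced the original state. A trimmed sketch of the same idea (temp-file handling is illustrative, and config_filter.py is assumed to read stdin as it does in the piped trace below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    live=$(mktemp)
    saved=$(mktemp)
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$live"
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$saved"
    if diff -u "$saved" "$live"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'configuration drift detected' >&2
    fi
    rm -f "$live" "$saved"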
00:21:27.669 00:47:30 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:21:27.669 00:47:30 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:27.669 00:47:30 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:21:27.669 00:47:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:21:27.669 + '[' 2 -ne 2 ']' 00:21:27.669 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:21:27.669 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:21:27.669 + rootdir=/home/vagrant/spdk_repo/spdk 00:21:27.669 +++ basename /dev/fd/62 00:21:27.669 ++ mktemp /tmp/62.XXX 00:21:27.669 + tmp_file_1=/tmp/62.nUe 00:21:27.669 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:27.669 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:21:27.669 + tmp_file_2=/tmp/spdk_tgt_config.json.5iV 00:21:27.669 + ret=0 00:21:27.669 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:21:28.235 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:21:28.235 + diff -u /tmp/62.nUe /tmp/spdk_tgt_config.json.5iV 00:21:28.235 + echo 'INFO: JSON config files are the same' 00:21:28.235 INFO: JSON config files are the same 00:21:28.235 + rm /tmp/62.nUe /tmp/spdk_tgt_config.json.5iV 00:21:28.235 + exit 0 00:21:28.235 00:47:31 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:21:28.235 INFO: changing configuration and checking if this can be detected... 00:21:28.235 00:47:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:21:28.235 00:47:31 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:21:28.235 00:47:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:21:28.493 00:47:31 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:28.494 00:47:31 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:21:28.494 00:47:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:21:28.494 + '[' 2 -ne 2 ']' 00:21:28.494 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:21:28.494 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:21:28.494 + rootdir=/home/vagrant/spdk_repo/spdk 00:21:28.494 +++ basename /dev/fd/62 00:21:28.494 ++ mktemp /tmp/62.XXX 00:21:28.494 + tmp_file_1=/tmp/62.Y9F 00:21:28.494 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:28.494 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:21:28.494 + tmp_file_2=/tmp/spdk_tgt_config.json.n4B 00:21:28.494 + ret=0 00:21:28.494 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:21:29.060 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:21:29.060 + diff -u /tmp/62.Y9F /tmp/spdk_tgt_config.json.n4B 00:21:29.060 + ret=1 00:21:29.060 + echo '=== Start of file: /tmp/62.Y9F ===' 00:21:29.060 + cat /tmp/62.Y9F 00:21:29.060 + echo '=== End of file: /tmp/62.Y9F ===' 00:21:29.060 + echo '' 00:21:29.060 + echo '=== Start of file: /tmp/spdk_tgt_config.json.n4B ===' 00:21:29.060 + cat /tmp/spdk_tgt_config.json.n4B 00:21:29.060 + echo '=== End of file: /tmp/spdk_tgt_config.json.n4B ===' 00:21:29.060 + echo '' 00:21:29.060 + rm /tmp/62.Y9F /tmp/spdk_tgt_config.json.n4B 00:21:29.060 + exit 1 00:21:29.060 INFO: configuration change detected. 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:21:29.060 00:47:32 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:29.060 00:47:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@317 -- # [[ -n 73999 ]] 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:21:29.060 00:47:32 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:29.060 00:47:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@193 -- # uname -s 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:21:29.060 00:47:32 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:29.060 00:47:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:29.060 00:47:32 json_config -- json_config/json_config.sh@323 -- # killprocess 73999 00:21:29.060 00:47:32 json_config -- common/autotest_common.sh@947 -- # '[' -z 73999 ']' 00:21:29.060 00:47:32 json_config -- common/autotest_common.sh@951 -- # kill -0 73999 00:21:29.060 00:47:32 json_config -- common/autotest_common.sh@952 -- # uname 00:21:29.060 00:47:32 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:29.061 00:47:32 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73999 00:21:29.061 
00:47:32 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:29.061 00:47:32 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:29.061 killing process with pid 73999 00:21:29.061 00:47:32 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73999' 00:21:29.061 00:47:32 json_config -- common/autotest_common.sh@966 -- # kill 73999 00:21:29.061 [2024-05-15 00:47:32.240013] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:29.061 00:47:32 json_config -- common/autotest_common.sh@971 -- # wait 73999 00:21:29.319 00:47:32 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:29.319 00:47:32 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:21:29.319 00:47:32 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:29.319 00:47:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:29.319 00:47:32 json_config -- json_config/json_config.sh@328 -- # return 0 00:21:29.319 INFO: Success 00:21:29.319 00:47:32 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:21:29.319 00:21:29.319 real 0m8.625s 00:21:29.319 user 0m12.185s 00:21:29.319 sys 0m2.167s 00:21:29.319 00:47:32 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:29.319 ************************************ 00:21:29.319 END TEST json_config 00:21:29.319 00:47:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:29.319 ************************************ 00:21:29.319 00:47:32 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:21:29.319 00:47:32 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:29.319 00:47:32 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:29.319 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:21:29.319 ************************************ 00:21:29.319 START TEST json_config_extra_key 00:21:29.319 ************************************ 00:21:29.319 00:47:32 json_config_extra_key -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:21:29.578 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.578 00:47:32 
json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:29.578 00:47:32 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.578 00:47:32 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.578 00:47:32 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.578 00:47:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.578 00:47:32 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.578 00:47:32 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.578 00:47:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:21:29.578 00:47:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:29.578 00:47:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.579 00:47:32 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.579 00:47:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.579 00:47:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:29.579 00:47:32 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:29.579 00:47:32 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:21:29.579 INFO: launching applications... 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:21:29.579 00:47:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74170 00:21:29.579 Waiting for target to run... 00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
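Here the target is launched directly from a JSON configuration file (--json .../extra_key.json) instead of being configured over RPC after startup. The same flag accepts a configuration produced earlier by save_config, which is how the relaunch step above worked; a hedged example of that round trip (the output file name is arbitrary):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    # Capture the running target's configuration...
    $rpc -s /var/tmp/spdk_tgt.sock save_config > /tmp/my_tgt_config.json
    # ...and later start a fresh target straight from that file, no post-start RPCs needed.
    $tgt -m 0x1 -r /var/tmp/spdk_tgt.sock --json /tmp/my_tgt_config.json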
00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74170 /var/tmp/spdk_tgt.sock 00:21:29.579 00:47:32 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:21:29.579 00:47:32 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 74170 ']' 00:21:29.579 00:47:32 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:21:29.579 00:47:32 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:29.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:21:29.579 00:47:32 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:21:29.579 00:47:32 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:29.579 00:47:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:21:29.579 [2024-05-15 00:47:32.710146] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:21:29.579 [2024-05-15 00:47:32.710258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74170 ] 00:21:30.145 [2024-05-15 00:47:33.225223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.145 [2024-05-15 00:47:33.300891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.710 00:47:33 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:30.710 00:47:33 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:21:30.710 00:21:30.710 00:47:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:21:30.710 INFO: shutting down applications... 00:21:30.710 00:47:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:21:30.710 00:47:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:21:30.710 00:47:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:21:30.710 00:47:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:21:30.710 00:47:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74170 ]] 00:21:30.710 00:47:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74170 00:21:30.710 00:47:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:21:30.710 00:47:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:30.710 00:47:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74170 00:21:30.710 00:47:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:21:31.276 00:47:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:21:31.276 00:47:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:31.276 00:47:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74170 00:21:31.276 00:47:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:21:31.276 00:47:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:21:31.276 00:47:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:21:31.276 SPDK target shutdown done 00:21:31.276 00:47:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:21:31.276 Success 00:21:31.276 00:47:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:21:31.276 00:21:31.276 real 0m1.767s 00:21:31.276 user 0m1.648s 00:21:31.276 sys 0m0.555s 00:21:31.276 00:47:34 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:31.276 00:47:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:21:31.276 ************************************ 00:21:31.276 END TEST json_config_extra_key 00:21:31.276 ************************************ 00:21:31.276 00:47:34 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:21:31.276 00:47:34 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:31.276 00:47:34 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:31.276 00:47:34 -- common/autotest_common.sh@10 -- # set +x 00:21:31.276 ************************************ 00:21:31.276 START TEST alias_rpc 00:21:31.276 ************************************ 00:21:31.276 00:47:34 alias_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:21:31.276 * Looking for test storage... 
00:21:31.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:21:31.276 00:47:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:21:31.276 00:47:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74249 00:21:31.276 00:47:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:31.276 00:47:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74249 00:21:31.276 00:47:34 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 74249 ']' 00:21:31.276 00:47:34 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.276 00:47:34 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:31.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.276 00:47:34 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.276 00:47:34 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:31.276 00:47:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:31.276 [2024-05-15 00:47:34.521144] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:21:31.276 [2024-05-15 00:47:34.521250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74249 ] 00:21:31.534 [2024-05-15 00:47:34.657813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.534 [2024-05-15 00:47:34.752154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.470 00:47:35 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:32.470 00:47:35 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:21:32.470 00:47:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:21:32.728 00:47:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74249 00:21:32.728 00:47:35 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 74249 ']' 00:21:32.728 00:47:35 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 74249 00:21:32.728 00:47:35 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:21:32.728 00:47:35 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:32.728 00:47:35 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74249 00:21:32.728 killing process with pid 74249 00:21:32.728 00:47:35 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:32.728 00:47:35 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:32.728 00:47:35 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74249' 00:21:32.728 00:47:35 alias_rpc -- common/autotest_common.sh@966 -- # kill 74249 00:21:32.728 00:47:35 alias_rpc -- common/autotest_common.sh@971 -- # wait 74249 00:21:32.986 00:21:32.986 real 0m1.812s 00:21:32.986 user 0m2.053s 00:21:32.986 sys 0m0.466s 00:21:32.986 00:47:36 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:32.986 00:47:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:32.986 ************************************ 00:21:32.986 END TEST alias_rpc 00:21:32.986 ************************************ 00:21:32.986 00:47:36 -- 
spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:21:32.986 00:47:36 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:21:32.986 00:47:36 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:32.986 00:47:36 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:32.986 00:47:36 -- common/autotest_common.sh@10 -- # set +x 00:21:32.986 ************************************ 00:21:32.986 START TEST dpdk_mem_utility 00:21:32.986 ************************************ 00:21:32.986 00:47:36 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:21:33.244 * Looking for test storage... 00:21:33.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:21:33.244 00:47:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:21:33.244 00:47:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74340 00:21:33.244 00:47:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:33.244 00:47:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74340 00:21:33.244 00:47:36 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 74340 ']' 00:21:33.244 00:47:36 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.244 00:47:36 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:33.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.244 00:47:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.244 00:47:36 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:33.244 00:47:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:21:33.244 [2024-05-15 00:47:36.389798] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:21:33.244 [2024-05-15 00:47:36.389902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74340 ] 00:21:33.244 [2024-05-15 00:47:36.529541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.503 [2024-05-15 00:47:36.624620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.438 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:34.438 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:21:34.438 00:47:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:21:34.438 00:47:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:21:34.438 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.438 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:21:34.438 { 00:21:34.438 "filename": "/tmp/spdk_mem_dump.txt" 00:21:34.438 } 00:21:34.438 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.438 00:47:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:21:34.438 DPDK memory size 814.000000 MiB in 1 heap(s) 00:21:34.438 1 heaps totaling size 814.000000 MiB 00:21:34.438 size: 814.000000 MiB heap id: 0 00:21:34.438 end heaps---------- 00:21:34.438 8 mempools totaling size 598.116089 MiB 00:21:34.438 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:21:34.438 size: 158.602051 MiB name: PDU_data_out_Pool 00:21:34.438 size: 84.521057 MiB name: bdev_io_74340 00:21:34.438 size: 51.011292 MiB name: evtpool_74340 00:21:34.438 size: 50.003479 MiB name: msgpool_74340 00:21:34.438 size: 21.763794 MiB name: PDU_Pool 00:21:34.438 size: 19.513306 MiB name: SCSI_TASK_Pool 00:21:34.438 size: 0.026123 MiB name: Session_Pool 00:21:34.438 end mempools------- 00:21:34.438 6 memzones totaling size 4.142822 MiB 00:21:34.438 size: 1.000366 MiB name: RG_ring_0_74340 00:21:34.438 size: 1.000366 MiB name: RG_ring_1_74340 00:21:34.438 size: 1.000366 MiB name: RG_ring_4_74340 00:21:34.438 size: 1.000366 MiB name: RG_ring_5_74340 00:21:34.438 size: 0.125366 MiB name: RG_ring_2_74340 00:21:34.438 size: 0.015991 MiB name: RG_ring_3_74340 00:21:34.438 end memzones------- 00:21:34.438 00:47:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:21:34.438 heap id: 0 total size: 814.000000 MiB number of busy elements: 218 number of free elements: 15 00:21:34.438 list of free elements. 
size: 12.486938 MiB 00:21:34.438 element at address: 0x200000400000 with size: 1.999512 MiB 00:21:34.438 element at address: 0x200018e00000 with size: 0.999878 MiB 00:21:34.438 element at address: 0x200019000000 with size: 0.999878 MiB 00:21:34.438 element at address: 0x200003e00000 with size: 0.996277 MiB 00:21:34.438 element at address: 0x200031c00000 with size: 0.994446 MiB 00:21:34.438 element at address: 0x200013800000 with size: 0.978699 MiB 00:21:34.438 element at address: 0x200007000000 with size: 0.959839 MiB 00:21:34.438 element at address: 0x200019200000 with size: 0.936584 MiB 00:21:34.438 element at address: 0x200000200000 with size: 0.837036 MiB 00:21:34.438 element at address: 0x20001aa00000 with size: 0.571716 MiB 00:21:34.438 element at address: 0x20000b200000 with size: 0.489990 MiB 00:21:34.438 element at address: 0x200000800000 with size: 0.487061 MiB 00:21:34.438 element at address: 0x200019400000 with size: 0.485657 MiB 00:21:34.438 element at address: 0x200027e00000 with size: 0.398682 MiB 00:21:34.438 element at address: 0x200003a00000 with size: 0.351685 MiB 00:21:34.438 list of standard malloc elements. size: 199.250488 MiB 00:21:34.438 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:21:34.438 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:21:34.438 element at address: 0x200018efff80 with size: 1.000122 MiB 00:21:34.439 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:21:34.439 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:21:34.439 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:21:34.439 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:21:34.439 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:21:34.439 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:21:34.439 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:21:34.439 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003adb300 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003adb500 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003affa80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003affb40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:21:34.439 element at 
address: 0x20000b27dac0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94600 
with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:21:34.439 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200027e66100 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200027e661c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200027e6cdc0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:21:34.439 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e340 with size: 0.000183 MiB 
00:21:34.440 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:21:34.440 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:21:34.440 list of memzone associated elements. 
size: 602.262573 MiB 00:21:34.440 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:21:34.440 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:21:34.440 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:21:34.440 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:21:34.440 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:21:34.440 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_74340_0 00:21:34.440 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:21:34.440 associated memzone info: size: 48.002930 MiB name: MP_evtpool_74340_0 00:21:34.440 element at address: 0x200003fff380 with size: 48.003052 MiB 00:21:34.440 associated memzone info: size: 48.002930 MiB name: MP_msgpool_74340_0 00:21:34.440 element at address: 0x2000195be940 with size: 20.255554 MiB 00:21:34.440 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:21:34.440 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:21:34.440 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:21:34.440 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:21:34.440 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_74340 00:21:34.440 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:21:34.440 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_74340 00:21:34.440 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:21:34.440 associated memzone info: size: 1.007996 MiB name: MP_evtpool_74340 00:21:34.440 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:21:34.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:21:34.440 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:21:34.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:21:34.440 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:21:34.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:21:34.440 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:21:34.440 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:21:34.440 element at address: 0x200003eff180 with size: 1.000488 MiB 00:21:34.440 associated memzone info: size: 1.000366 MiB name: RG_ring_0_74340 00:21:34.440 element at address: 0x200003affc00 with size: 1.000488 MiB 00:21:34.440 associated memzone info: size: 1.000366 MiB name: RG_ring_1_74340 00:21:34.440 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:21:34.440 associated memzone info: size: 1.000366 MiB name: RG_ring_4_74340 00:21:34.440 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:21:34.440 associated memzone info: size: 1.000366 MiB name: RG_ring_5_74340 00:21:34.440 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:21:34.440 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_74340 00:21:34.440 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:21:34.440 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:21:34.440 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:21:34.440 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:21:34.440 element at address: 0x20001947c540 with size: 0.250488 MiB 00:21:34.440 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:21:34.440 element at address: 0x200003adf880 with size: 0.125488 MiB 00:21:34.440 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_74340 00:21:34.440 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:21:34.440 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:21:34.440 element at address: 0x200027e66280 with size: 0.023743 MiB 00:21:34.440 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:21:34.440 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:21:34.440 associated memzone info: size: 0.015991 MiB name: RG_ring_3_74340 00:21:34.440 element at address: 0x200027e6c3c0 with size: 0.002441 MiB 00:21:34.440 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:21:34.440 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:21:34.440 associated memzone info: size: 0.000183 MiB name: MP_msgpool_74340 00:21:34.440 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:21:34.440 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_74340 00:21:34.440 element at address: 0x200027e6ce80 with size: 0.000305 MiB 00:21:34.440 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:21:34.440 00:47:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:21:34.440 00:47:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74340 00:21:34.440 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 74340 ']' 00:21:34.440 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 74340 00:21:34.440 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:21:34.440 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:34.440 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74340 00:21:34.441 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:34.441 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:34.441 killing process with pid 74340 00:21:34.441 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74340' 00:21:34.441 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 74340 00:21:34.441 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 74340 00:21:34.699 00:21:34.699 real 0m1.733s 00:21:34.699 user 0m1.950s 00:21:34.699 sys 0m0.418s 00:21:34.699 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:34.699 00:47:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:21:34.699 ************************************ 00:21:34.699 END TEST dpdk_mem_utility 00:21:34.699 ************************************ 00:21:34.957 00:47:38 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:21:34.957 00:47:38 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:34.957 00:47:38 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:34.957 00:47:38 -- common/autotest_common.sh@10 -- # set +x 00:21:34.957 ************************************ 00:21:34.957 START TEST event 00:21:34.957 ************************************ 00:21:34.957 00:47:38 event -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:21:34.957 * Looking for test storage... 
00:21:34.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:21:34.957 00:47:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:34.957 00:47:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:21:34.957 00:47:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:21:34.957 00:47:38 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:21:34.957 00:47:38 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:34.957 00:47:38 event -- common/autotest_common.sh@10 -- # set +x 00:21:34.957 ************************************ 00:21:34.957 START TEST event_perf 00:21:34.957 ************************************ 00:21:34.957 00:47:38 event.event_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:21:34.957 Running I/O for 1 seconds...[2024-05-15 00:47:38.145196] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:21:34.957 [2024-05-15 00:47:38.145306] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74435 ] 00:21:35.215 [2024-05-15 00:47:38.280806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.215 [2024-05-15 00:47:38.384307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.215 [2024-05-15 00:47:38.384515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.215 [2024-05-15 00:47:38.384676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.215 [2024-05-15 00:47:38.384880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.591 Running I/O for 1 seconds... 00:21:36.591 lcore 0: 116801 00:21:36.591 lcore 1: 116798 00:21:36.591 lcore 2: 116799 00:21:36.591 lcore 3: 116799 00:21:36.591 done. 00:21:36.591 00:21:36.591 real 0m1.334s 00:21:36.591 user 0m4.145s 00:21:36.591 sys 0m0.063s 00:21:36.591 00:47:39 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:36.591 00:47:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:21:36.591 ************************************ 00:21:36.591 END TEST event_perf 00:21:36.591 ************************************ 00:21:36.591 00:47:39 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:21:36.591 00:47:39 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:21:36.591 00:47:39 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:36.591 00:47:39 event -- common/autotest_common.sh@10 -- # set +x 00:21:36.591 ************************************ 00:21:36.591 START TEST event_reactor 00:21:36.591 ************************************ 00:21:36.591 00:47:39 event.event_reactor -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:21:36.591 [2024-05-15 00:47:39.532060] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:21:36.591 [2024-05-15 00:47:39.532780] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74468 ] 00:21:36.591 [2024-05-15 00:47:39.672811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.591 [2024-05-15 00:47:39.769278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.969 test_start 00:21:37.969 oneshot 00:21:37.969 tick 100 00:21:37.969 tick 100 00:21:37.969 tick 250 00:21:37.969 tick 100 00:21:37.969 tick 100 00:21:37.969 tick 250 00:21:37.969 tick 100 00:21:37.969 tick 500 00:21:37.969 tick 100 00:21:37.969 tick 100 00:21:37.969 tick 250 00:21:37.969 tick 100 00:21:37.969 tick 100 00:21:37.969 test_end 00:21:37.969 00:21:37.969 real 0m1.328s 00:21:37.969 user 0m1.159s 00:21:37.969 sys 0m0.062s 00:21:37.969 00:47:40 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:37.969 00:47:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:21:37.969 ************************************ 00:21:37.969 END TEST event_reactor 00:21:37.969 ************************************ 00:21:37.969 00:47:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:21:37.969 00:47:40 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:21:37.969 00:47:40 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:37.969 00:47:40 event -- common/autotest_common.sh@10 -- # set +x 00:21:37.969 ************************************ 00:21:37.969 START TEST event_reactor_perf 00:21:37.969 ************************************ 00:21:37.970 00:47:40 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:21:37.970 [2024-05-15 00:47:40.914440] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:21:37.970 [2024-05-15 00:47:40.914536] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74504 ] 00:21:37.970 [2024-05-15 00:47:41.050735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.970 [2024-05-15 00:47:41.147061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.358 test_start 00:21:39.358 test_end 00:21:39.358 Performance: 369113 events per second 00:21:39.358 00:21:39.358 real 0m1.325s 00:21:39.358 user 0m1.162s 00:21:39.358 sys 0m0.055s 00:21:39.358 00:47:42 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:39.358 00:47:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:21:39.358 ************************************ 00:21:39.358 END TEST event_reactor_perf 00:21:39.358 ************************************ 00:21:39.358 00:47:42 event -- event/event.sh@49 -- # uname -s 00:21:39.358 00:47:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:21:39.358 00:47:42 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:21:39.358 00:47:42 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:39.358 00:47:42 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:39.358 00:47:42 event -- common/autotest_common.sh@10 -- # set +x 00:21:39.358 ************************************ 00:21:39.358 START TEST event_scheduler 00:21:39.358 ************************************ 00:21:39.358 00:47:42 event.event_scheduler -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:21:39.358 * Looking for test storage... 00:21:39.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:21:39.358 00:47:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:21:39.358 00:47:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=74565 00:21:39.358 00:47:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:21:39.358 00:47:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:21:39.358 00:47:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 74565 00:21:39.358 00:47:42 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 74565 ']' 00:21:39.358 00:47:42 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.358 00:47:42 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:39.358 00:47:42 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.358 00:47:42 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:39.358 00:47:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:39.358 [2024-05-15 00:47:42.414897] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:21:39.359 [2024-05-15 00:47:42.415019] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74565 ] 00:21:39.359 [2024-05-15 00:47:42.554163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.617 [2024-05-15 00:47:42.689564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.617 [2024-05-15 00:47:42.689737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.617 [2024-05-15 00:47:42.689887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.617 [2024-05-15 00:47:42.689907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.186 00:47:43 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:40.186 00:47:43 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:21:40.186 00:47:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:21:40.186 00:47:43 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.186 00:47:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:40.186 POWER: Env isn't set yet! 00:21:40.186 POWER: Attempting to initialise ACPI cpufreq power management... 00:21:40.186 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:40.186 POWER: Cannot set governor of lcore 0 to userspace 00:21:40.186 POWER: Attempting to initialise PSTAT power management... 00:21:40.186 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:40.186 POWER: Cannot set governor of lcore 0 to performance 00:21:40.186 POWER: Attempting to initialise AMD PSTATE power management... 00:21:40.186 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:40.186 POWER: Cannot set governor of lcore 0 to userspace 00:21:40.186 POWER: Attempting to initialise CPPC power management... 00:21:40.186 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:40.186 POWER: Cannot set governor of lcore 0 to userspace 00:21:40.186 POWER: Attempting to initialise VM power management... 00:21:40.186 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:21:40.186 POWER: Unable to set Power Management Environment for lcore 0 00:21:40.186 [2024-05-15 00:47:43.461617] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:21:40.186 [2024-05-15 00:47:43.461735] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:21:40.186 [2024-05-15 00:47:43.461831] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:21:40.186 00:47:43 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.186 00:47:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:21:40.186 00:47:43 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.186 00:47:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:40.444 [2024-05-15 00:47:43.581245] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
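The trace above shows the scheduler app requesting the dynamic scheduler before framework init; the DPDK power governor cannot be initialized inside the VM (hence the POWER/cpufreq errors), so the scheduler proceeds without it and framework_start_init completes. Driving the same RPCs by hand looks roughly like the sketch below; it assumes a target started with --wait-for-rpc (framework_set_scheduler is only accepted before init) and that framework_get_scheduler and thread_get_stats are available for inspection, which is stated here as an assumption rather than taken from this log:

# start the target in the pre-init state so a scheduler can still be selected
./build/bin/spdk_tgt -m 0xF --wait-for-rpc -r /var/tmp/spdk.sock &

# pick the dynamic scheduler, then finish subsystem initialization
./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init

# confirm the active scheduler and look at per-thread busy/idle counters
./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler
./scripts/rpc.py -s /var/tmp/spdk.sock thread_get_stats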
00:21:40.444 00:47:43 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:21:40.445 00:47:43 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:40.445 00:47:43 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 ************************************ 00:21:40.445 START TEST scheduler_create_thread 00:21:40.445 ************************************ 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 2 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 3 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 4 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 5 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 6 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 7 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 8 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 9 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 10 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.445 00:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:42.349 00:47:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.349 00:47:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:21:42.349 00:47:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:21:42.349 00:47:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.349 00:47:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:43.286 ************************************ 00:21:43.286 END TEST scheduler_create_thread 00:21:43.286 ************************************ 00:21:43.286 00:47:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.286 00:21:43.286 real 0m2.617s 00:21:43.286 user 0m0.023s 00:21:43.286 sys 0m0.003s 00:21:43.286 00:47:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:43.286 00:47:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:43.287 00:47:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:43.287 00:47:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 74565 00:21:43.287 00:47:46 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 74565 ']' 00:21:43.287 00:47:46 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 74565 00:21:43.287 00:47:46 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:21:43.287 00:47:46 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:43.287 00:47:46 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74565 00:21:43.287 killing process with pid 74565 00:21:43.287 00:47:46 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:21:43.287 00:47:46 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:21:43.287 00:47:46 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74565' 00:21:43.287 00:47:46 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 74565 00:21:43.287 00:47:46 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 74565 00:21:43.546 [2024-05-15 00:47:46.691374] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
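The thread churn in this test comes from the scheduler app's own RPC plugin rather than from core rpc.py: scheduler_thread_create registers a thread with a name, an optional core mask and a target active percentage and prints the new thread id, while scheduler_thread_set_active and scheduler_thread_delete adjust or remove it by that id. A rough sketch against the test app started as above; the --plugin mechanism is standard rpc.py, but scheduler_plugin itself ships with the test, so having PYTHONPATH point at the test's plugin directory is an assumption here:

# one fully busy thread pinned to core 0, one idle unpinned thread
tid_busy=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
tid_idle=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)

# raise the idle thread to 50 % activity, then retire the busy one
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid_idle" 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid_busy"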
00:21:43.805 00:21:43.805 real 0m4.717s 00:21:43.805 user 0m8.916s 00:21:43.805 sys 0m0.423s 00:21:43.805 00:47:46 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:43.805 ************************************ 00:21:43.805 END TEST event_scheduler 00:21:43.805 ************************************ 00:21:43.805 00:47:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:43.805 00:47:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:21:43.805 00:47:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:21:43.805 00:47:47 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:21:43.805 00:47:47 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:43.805 00:47:47 event -- common/autotest_common.sh@10 -- # set +x 00:21:43.805 ************************************ 00:21:43.805 START TEST app_repeat 00:21:43.805 ************************************ 00:21:43.805 00:47:47 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=74684 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:21:43.805 Process app_repeat pid: 74684 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 74684' 00:21:43.805 spdk_app_start Round 0 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:21:43.805 00:47:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74684 /var/tmp/spdk-nbd.sock 00:21:43.805 00:47:47 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 74684 ']' 00:21:43.805 00:47:47 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:43.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:43.805 00:47:47 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:43.805 00:47:47 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:43.805 00:47:47 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:43.805 00:47:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:43.805 [2024-05-15 00:47:47.082416] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:21:43.805 [2024-05-15 00:47:47.082920] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74684 ] 00:21:44.064 [2024-05-15 00:47:47.221708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:44.064 [2024-05-15 00:47:47.318633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.064 [2024-05-15 00:47:47.318666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.324 00:47:47 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:44.324 00:47:47 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:21:44.324 00:47:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:44.582 Malloc0 00:21:44.582 00:47:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:44.842 Malloc1 00:21:44.842 00:47:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:44.842 00:47:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:21:45.101 /dev/nbd0 00:21:45.101 00:47:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:45.101 00:47:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:45.101 00:47:48 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:21:45.102 00:47:48 event.app_repeat -- 
common/autotest_common.sh@870 -- # break 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:45.102 1+0 records in 00:21:45.102 1+0 records out 00:21:45.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301298 s, 13.6 MB/s 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:21:45.102 00:47:48 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:21:45.102 00:47:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:45.102 00:47:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:45.102 00:47:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:21:45.361 /dev/nbd1 00:21:45.361 00:47:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:45.361 00:47:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:45.361 1+0 records in 00:21:45.361 1+0 records out 00:21:45.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427501 s, 9.6 MB/s 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:21:45.361 00:47:48 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:21:45.361 00:47:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:45.361 00:47:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:45.361 00:47:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:45.361 00:47:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:45.361 
00:47:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:45.620 00:47:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:45.620 { 00:21:45.620 "bdev_name": "Malloc0", 00:21:45.620 "nbd_device": "/dev/nbd0" 00:21:45.620 }, 00:21:45.620 { 00:21:45.620 "bdev_name": "Malloc1", 00:21:45.620 "nbd_device": "/dev/nbd1" 00:21:45.620 } 00:21:45.620 ]' 00:21:45.620 00:47:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:45.620 { 00:21:45.620 "bdev_name": "Malloc0", 00:21:45.620 "nbd_device": "/dev/nbd0" 00:21:45.620 }, 00:21:45.620 { 00:21:45.620 "bdev_name": "Malloc1", 00:21:45.620 "nbd_device": "/dev/nbd1" 00:21:45.620 } 00:21:45.620 ]' 00:21:45.620 00:47:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:45.878 /dev/nbd1' 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:45.878 /dev/nbd1' 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:21:45.878 256+0 records in 00:21:45.878 256+0 records out 00:21:45.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00814175 s, 129 MB/s 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:45.878 256+0 records in 00:21:45.878 256+0 records out 00:21:45.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025393 s, 41.3 MB/s 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:45.878 256+0 records in 00:21:45.878 256+0 records out 00:21:45.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283565 s, 37.0 MB/s 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:45.878 00:47:48 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:45.878 00:47:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:21:45.878 00:47:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:45.878 00:47:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:21:45.878 00:47:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:45.878 00:47:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:21:45.878 00:47:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:45.878 00:47:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:45.878 00:47:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:45.878 00:47:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:21:45.878 00:47:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.878 00:47:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:46.137 00:47:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:46.137 00:47:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:46.137 00:47:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:46.137 00:47:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:46.137 00:47:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:46.137 00:47:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:46.137 00:47:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:46.137 00:47:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:46.137 00:47:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:46.137 00:47:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:46.395 00:47:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:46.395 00:47:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:46.395 00:47:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:46.395 00:47:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:46.395 00:47:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:46.395 00:47:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:46.395 00:47:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:46.395 00:47:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:46.395 00:47:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:46.395 00:47:49 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:46.395 00:47:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:46.653 00:47:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:21:46.653 00:47:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:21:46.911 00:47:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:21:47.169 [2024-05-15 00:47:50.355503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:47.169 [2024-05-15 00:47:50.451175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.169 [2024-05-15 00:47:50.451187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.428 [2024-05-15 00:47:50.504910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:21:47.428 [2024-05-15 00:47:50.504968] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:21:49.955 00:47:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:49.955 spdk_app_start Round 1 00:21:49.955 00:47:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:21:49.955 00:47:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74684 /var/tmp/spdk-nbd.sock 00:21:49.955 00:47:53 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 74684 ']' 00:21:49.955 00:47:53 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:49.955 00:47:53 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:49.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:49.955 00:47:53 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
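Before any I/O hits a freshly exported device in the rounds above, the waitfornbd helper confirms the kernel actually exposes it and that it is readable. A simplified sketch of that polling pattern, following the trace (the retry bound of 20, the scratch-file path and the size check come from the log; the sleep interval is an assumption, since the first grep already succeeded in this run):

# Poll until the NBD device shows up, then prove it can be read.
waitfornbd_sketch() {
    local nbd_name=$1 i size
    local test_file=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; this run never needed to retry
    done
    # One 4 KiB direct-I/O read into a scratch file, then check its size.
    dd if=/dev/$nbd_name of="$test_file" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$test_file")
    rm -f "$test_file"
    [[ $size != 0 ]]
}

Usage matching the trace would be waitfornbd_sketch nbd0 right after rpc.py nbd_start_disk Malloc0 /dev/nbd0 returns.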
00:21:49.955 00:47:53 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:49.955 00:47:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:50.212 00:47:53 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:50.212 00:47:53 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:21:50.212 00:47:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:50.777 Malloc0 00:21:50.777 00:47:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:51.035 Malloc1 00:21:51.035 00:47:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:51.035 00:47:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:21:51.293 /dev/nbd0 00:21:51.293 00:47:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:51.293 00:47:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:51.293 1+0 records in 00:21:51.293 1+0 records out 
00:21:51.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244193 s, 16.8 MB/s 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:21:51.293 00:47:54 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:21:51.293 00:47:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:51.293 00:47:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:51.293 00:47:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:21:51.551 /dev/nbd1 00:21:51.551 00:47:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:51.551 00:47:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:51.551 1+0 records in 00:21:51.551 1+0 records out 00:21:51.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328403 s, 12.5 MB/s 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:21:51.551 00:47:54 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:21:51.551 00:47:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:51.551 00:47:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:51.551 00:47:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:51.551 00:47:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:51.551 00:47:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:51.809 00:47:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:51.809 { 00:21:51.809 "bdev_name": "Malloc0", 00:21:51.809 "nbd_device": "/dev/nbd0" 00:21:51.809 }, 00:21:51.809 { 00:21:51.809 "bdev_name": "Malloc1", 00:21:51.809 "nbd_device": "/dev/nbd1" 00:21:51.809 } 
00:21:51.809 ]' 00:21:51.809 00:47:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:51.809 { 00:21:51.809 "bdev_name": "Malloc0", 00:21:51.809 "nbd_device": "/dev/nbd0" 00:21:51.809 }, 00:21:51.809 { 00:21:51.809 "bdev_name": "Malloc1", 00:21:51.809 "nbd_device": "/dev/nbd1" 00:21:51.809 } 00:21:51.809 ]' 00:21:51.809 00:47:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:51.809 /dev/nbd1' 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:51.809 /dev/nbd1' 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:51.809 00:47:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:21:51.810 256+0 records in 00:21:51.810 256+0 records out 00:21:51.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00892374 s, 118 MB/s 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:51.810 256+0 records in 00:21:51.810 256+0 records out 00:21:51.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261992 s, 40.0 MB/s 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:51.810 256+0 records in 00:21:51.810 256+0 records out 00:21:51.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297885 s, 35.2 MB/s 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:51.810 00:47:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:51.810 00:47:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:21:52.067 00:47:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:52.067 00:47:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:21:52.067 00:47:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:52.067 00:47:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:21:52.067 00:47:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:52.067 00:47:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:52.067 00:47:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:52.067 00:47:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:21:52.067 00:47:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:52.067 00:47:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:52.326 00:47:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:52.326 00:47:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:52.326 00:47:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:52.326 00:47:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:52.326 00:47:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:52.326 00:47:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:52.326 00:47:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:52.326 00:47:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:52.326 00:47:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:52.326 00:47:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:52.583 00:47:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:52.842 00:47:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:52.842 00:47:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:52.842 00:47:55 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:21:52.842 00:47:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:52.842 00:47:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:21:52.842 00:47:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:52.842 00:47:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:21:52.842 00:47:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:21:52.842 00:47:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:21:52.842 00:47:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:21:52.842 00:47:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:52.842 00:47:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:21:52.842 00:47:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:21:53.408 00:47:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:21:53.408 [2024-05-15 00:47:56.562078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:53.408 [2024-05-15 00:47:56.654351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.408 [2024-05-15 00:47:56.654362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.666 [2024-05-15 00:47:56.709358] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:21:53.666 [2024-05-15 00:47:56.709418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:21:56.219 00:47:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:56.219 spdk_app_start Round 2 00:21:56.219 00:47:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:21:56.219 00:47:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74684 /var/tmp/spdk-nbd.sock 00:21:56.219 00:47:59 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 74684 ']' 00:21:56.219 00:47:59 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:56.219 00:47:59 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:56.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:56.219 00:47:59 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
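The core check in every round is the same write-and-verify cycle over the two exported devices. A sketch of that cycle, lifted from the dd and cmp invocations in the trace (error handling is omitted, and it assumes the round's NBD exports are still up, so it should not be pointed at devices holding real data):

# Write/verify cycle traced once per app_repeat round.
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# Seed 1 MiB of random data.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

# Push the same data to every exported device with direct I/O ...
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

# ... then read each device back and compare the first 1 MiB byte for byte.
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"
done

rm "$tmp_file"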
00:21:56.219 00:47:59 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:56.219 00:47:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:56.477 00:47:59 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:56.477 00:47:59 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:21:56.477 00:47:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:56.736 Malloc0 00:21:56.736 00:47:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:56.994 Malloc1 00:21:56.994 00:48:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.994 00:48:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:21:57.253 /dev/nbd0 00:21:57.253 00:48:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:57.253 00:48:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:57.253 00:48:00 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:21:57.253 00:48:00 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:21:57.253 00:48:00 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:21:57.253 00:48:00 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:21:57.253 00:48:00 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:21:57.253 00:48:00 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:21:57.253 00:48:00 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:21:57.253 00:48:00 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:21:57.253 00:48:00 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:57.253 1+0 records in 00:21:57.253 1+0 records out 
00:21:57.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382543 s, 10.7 MB/s 00:21:57.253 00:48:00 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:57.254 00:48:00 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:21:57.254 00:48:00 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:57.254 00:48:00 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:21:57.254 00:48:00 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:21:57.254 00:48:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:57.254 00:48:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:57.254 00:48:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:21:57.512 /dev/nbd1 00:21:57.512 00:48:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:57.512 00:48:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:57.512 1+0 records in 00:21:57.512 1+0 records out 00:21:57.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530826 s, 7.7 MB/s 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:21:57.512 00:48:00 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:21:57.512 00:48:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:57.512 00:48:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:57.512 00:48:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:57.512 00:48:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:57.512 00:48:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:57.771 00:48:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:57.771 { 00:21:57.771 "bdev_name": "Malloc0", 00:21:57.771 "nbd_device": "/dev/nbd0" 00:21:57.771 }, 00:21:57.771 { 00:21:57.771 "bdev_name": "Malloc1", 00:21:57.771 "nbd_device": "/dev/nbd1" 00:21:57.771 } 
00:21:57.771 ]' 00:21:57.771 00:48:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:57.771 00:48:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:57.771 { 00:21:57.771 "bdev_name": "Malloc0", 00:21:57.771 "nbd_device": "/dev/nbd0" 00:21:57.771 }, 00:21:57.771 { 00:21:57.771 "bdev_name": "Malloc1", 00:21:57.771 "nbd_device": "/dev/nbd1" 00:21:57.771 } 00:21:57.771 ]' 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:58.030 /dev/nbd1' 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:58.030 /dev/nbd1' 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:58.030 00:48:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:21:58.031 256+0 records in 00:21:58.031 256+0 records out 00:21:58.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00698606 s, 150 MB/s 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:58.031 256+0 records in 00:21:58.031 256+0 records out 00:21:58.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257366 s, 40.7 MB/s 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:58.031 256+0 records in 00:21:58.031 256+0 records out 00:21:58.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266996 s, 39.3 MB/s 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:58.031 00:48:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:58.031 00:48:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:58.289 00:48:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:58.289 00:48:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:58.289 00:48:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:58.289 00:48:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:58.289 00:48:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:58.289 00:48:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:58.289 00:48:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:58.289 00:48:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:58.289 00:48:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:58.289 00:48:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:58.547 00:48:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:58.805 00:48:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:58.805 00:48:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:58.805 00:48:02 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:21:59.063 00:48:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:59.063 00:48:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:21:59.063 00:48:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:59.063 00:48:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:21:59.063 00:48:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:21:59.063 00:48:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:21:59.063 00:48:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:21:59.063 00:48:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:59.063 00:48:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:21:59.063 00:48:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:21:59.321 00:48:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:21:59.321 [2024-05-15 00:48:02.578923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:59.579 [2024-05-15 00:48:02.660461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.579 [2024-05-15 00:48:02.660472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.579 [2024-05-15 00:48:02.713686] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:21:59.579 [2024-05-15 00:48:02.713770] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:22:02.861 00:48:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 74684 /var/tmp/spdk-nbd.sock 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 74684 ']' 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:02.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
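After the disks are stopped, each round asserts that the app no longer exports anything. A sketch of that count check, following the jq and grep pipeline in the trace (the rpc.py path and socket are the ones used throughout this run):

# Count the NBD devices an SPDK app still exports over its RPC socket.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_server=/var/tmp/spdk-nbd.sock

nbd_disks_json=$("$rpc_py" -s "$rpc_server" nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c still prints 0 when nothing matches but exits non-zero, so the
# trailing 'true' (visible in the trace) keeps the check alive under set -e.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
[[ $count -eq 0 ]] && echo 'all NBD devices stopped'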
00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:22:02.861 00:48:05 event.app_repeat -- event/event.sh@39 -- # killprocess 74684 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 74684 ']' 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 74684 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74684 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:02.861 killing process with pid 74684 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74684' 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@966 -- # kill 74684 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@971 -- # wait 74684 00:22:02.861 spdk_app_start is called in Round 0. 00:22:02.861 Shutdown signal received, stop current app iteration 00:22:02.861 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:22:02.861 spdk_app_start is called in Round 1. 00:22:02.861 Shutdown signal received, stop current app iteration 00:22:02.861 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:22:02.861 spdk_app_start is called in Round 2. 00:22:02.861 Shutdown signal received, stop current app iteration 00:22:02.861 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:22:02.861 spdk_app_start is called in Round 3. 00:22:02.861 Shutdown signal received, stop current app iteration 00:22:02.861 00:48:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:22:02.861 00:48:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:22:02.861 00:22:02.861 real 0m18.854s 00:22:02.861 user 0m42.352s 00:22:02.861 sys 0m3.213s 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:02.861 00:48:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:22:02.861 ************************************ 00:22:02.861 END TEST app_repeat 00:22:02.861 ************************************ 00:22:02.861 00:48:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:22:02.861 00:48:05 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:22:02.861 00:48:05 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:02.861 00:48:05 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:02.861 00:48:05 event -- common/autotest_common.sh@10 -- # set +x 00:22:02.861 ************************************ 00:22:02.861 START TEST cpu_locks 00:22:02.861 ************************************ 00:22:02.861 00:48:05 event.cpu_locks -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:22:02.861 * Looking for test storage... 
00:22:02.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:22:02.861 00:48:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:22:02.861 00:48:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:22:02.861 00:48:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:22:02.861 00:48:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:22:02.861 00:48:06 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:02.862 00:48:06 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:02.862 00:48:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:02.862 ************************************ 00:22:02.862 START TEST default_locks 00:22:02.862 ************************************ 00:22:02.862 00:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:22:02.862 00:48:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75300 00:22:02.862 00:48:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75300 00:22:02.862 00:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 75300 ']' 00:22:02.862 00:48:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:02.862 00:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.862 00:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:02.862 00:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.862 00:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:02.862 00:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:22:02.862 [2024-05-15 00:48:06.117497] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:22:02.862 [2024-05-15 00:48:06.117633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75300 ] 00:22:03.121 [2024-05-15 00:48:06.250704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.121 [2024-05-15 00:48:06.348348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.082 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:04.082 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:22:04.082 00:48:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75300 00:22:04.082 00:48:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75300 00:22:04.082 00:48:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75300 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 75300 ']' 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 75300 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75300 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:04.341 killing process with pid 75300 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75300' 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 75300 00:22:04.341 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 75300 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75300 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 75300 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 75300 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 75300 ']' 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:04.909 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:22:04.909 ERROR: process (pid: 75300) is no longer running 00:22:04.909 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 843: kill: (75300) - No such process 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:22:04.909 00:22:04.909 real 0m1.865s 00:22:04.909 user 0m2.018s 00:22:04.909 sys 0m0.573s 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:04.909 ************************************ 00:22:04.909 END TEST default_locks 00:22:04.909 ************************************ 00:22:04.909 00:48:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:22:04.909 00:48:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:22:04.909 00:48:07 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:04.909 00:48:07 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:04.909 00:48:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:04.909 ************************************ 00:22:04.909 START TEST default_locks_via_rpc 00:22:04.909 ************************************ 00:22:04.909 00:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:22:04.909 00:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75359 00:22:04.909 00:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75359 00:22:04.909 00:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:04.909 00:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 75359 ']' 00:22:04.909 00:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.909 00:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:04.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
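The locks_exist check that ran just above (cpu_locks.sh@22) is simply asking the kernel whether the reactor process still holds the lock it appears to take on its core's lock file under /var/tmp. Pulled out of the harness, the same check looks roughly like this, with the pid hard-coded to the value from this run:

    pid=75300
    # the target holds a file lock on /var/tmp/spdk_cpu_lock_<core>; lslocks lists locks per pid
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core lock still held by pid $pid"
    fi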
00:22:04.909 00:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.909 00:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:04.909 00:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:04.909 [2024-05-15 00:48:08.043103] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:04.909 [2024-05-15 00:48:08.043228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75359 ] 00:22:04.909 [2024-05-15 00:48:08.183099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.168 [2024-05-15 00:48:08.285951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.735 00:48:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:05.735 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:05.994 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:05.994 00:48:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75359 00:22:05.994 00:48:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75359 00:22:05.994 00:48:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75359 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 75359 ']' 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 75359 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- 
# ps --no-headers -o comm= 75359 00:22:06.251 killing process with pid 75359 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75359' 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 75359 00:22:06.251 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 75359 00:22:06.817 ************************************ 00:22:06.817 END TEST default_locks_via_rpc 00:22:06.817 ************************************ 00:22:06.817 00:22:06.817 real 0m1.896s 00:22:06.817 user 0m2.051s 00:22:06.817 sys 0m0.541s 00:22:06.817 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:06.817 00:48:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 00:48:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:22:06.817 00:48:09 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:06.817 00:48:09 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:06.817 00:48:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 ************************************ 00:22:06.817 START TEST non_locking_app_on_locked_coremask 00:22:06.817 ************************************ 00:22:06.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.817 00:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:22:06.817 00:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75428 00:22:06.817 00:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:06.817 00:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75428 /var/tmp/spdk.sock 00:22:06.817 00:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 75428 ']' 00:22:06.817 00:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.817 00:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:06.817 00:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.817 00:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:06.817 00:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 [2024-05-15 00:48:09.981750] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
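default_locks_via_rpc, which wrapped up just above, exercises the same core lock but toggles it on a running target over the RPC socket instead of at startup. Outside the harness the two calls would look something like the lines below; the scripts/rpc.py path is assumed here, since the log's rpc_cmd wrapper hides it:

    # drop the per-core locks of the already-running target...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # ...then take them again; the test follows up with the same lslocks check
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks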
00:22:06.817 [2024-05-15 00:48:09.982112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75428 ] 00:22:07.076 [2024-05-15 00:48:10.116766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.076 [2024-05-15 00:48:10.214450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75456 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75456 /var/tmp/spdk2.sock 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 75456 ']' 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:08.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:08.010 00:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:08.010 [2024-05-15 00:48:11.124272] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:08.010 [2024-05-15 00:48:11.124387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75456 ] 00:22:08.010 [2024-05-15 00:48:11.268989] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
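non_locking_app_on_locked_coremask starts a second target on a core that is already locked; it only comes up because it opts out of locking, which is what the "CPU core locks deactivated" notice above reports. Condensed to the two launches (spdk_tgt abbreviates the full build path shown in the log):

    # first instance claims core 0 and its lock file
    spdk_tgt -m 0x1 &
    # second instance shares core 0 but skips the lock and uses a second RPC socket
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &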
00:22:08.010 [2024-05-15 00:48:11.269040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.268 [2024-05-15 00:48:11.468826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.202 00:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:09.202 00:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:22:09.202 00:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75428 00:22:09.202 00:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:22:09.202 00:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75428 00:22:09.769 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75428 00:22:09.769 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 75428 ']' 00:22:09.769 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 75428 00:22:09.769 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:22:09.769 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:09.769 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75428 00:22:10.027 killing process with pid 75428 00:22:10.027 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:10.027 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:10.027 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75428' 00:22:10.027 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 75428 00:22:10.027 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 75428 00:22:10.594 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75456 00:22:10.594 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 75456 ']' 00:22:10.594 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 75456 00:22:10.594 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:22:10.594 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:10.594 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75456 00:22:10.594 killing process with pid 75456 00:22:10.594 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:10.594 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:10.594 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75456' 00:22:10.594 00:48:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 75456 00:22:10.594 00:48:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 75456 00:22:11.164 00:22:11.164 real 0m4.276s 00:22:11.164 user 0m4.876s 00:22:11.164 sys 0m1.191s 00:22:11.164 00:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:11.164 00:48:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:11.164 ************************************ 00:22:11.164 END TEST non_locking_app_on_locked_coremask 00:22:11.164 ************************************ 00:22:11.164 00:48:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:22:11.164 00:48:14 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:11.164 00:48:14 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:11.164 00:48:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:11.164 ************************************ 00:22:11.164 START TEST locking_app_on_unlocked_coremask 00:22:11.164 ************************************ 00:22:11.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.164 00:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:22:11.164 00:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75535 00:22:11.164 00:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75535 /var/tmp/spdk.sock 00:22:11.164 00:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 75535 ']' 00:22:11.164 00:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:22:11.164 00:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.164 00:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:11.164 00:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.164 00:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:11.164 00:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:11.164 [2024-05-15 00:48:14.349127] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:11.164 [2024-05-15 00:48:14.349300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75535 ] 00:22:11.422 [2024-05-15 00:48:14.494363] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:22:11.422 [2024-05-15 00:48:14.494416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.422 [2024-05-15 00:48:14.590244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75563 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75563 /var/tmp/spdk2.sock 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 75563 ']' 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:11.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:11.989 00:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:12.246 [2024-05-15 00:48:15.323191] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
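locking_app_on_unlocked_coremask flips the previous case: here the first target is the one launched with --disable-cpumask-locks, so the plain second launch on the same core is the process that ends up owning the lock (the later locks_exist check runs against the second pid). Condensed in the same shorthand:

    # first target leaves core 0 unlocked
    spdk_tgt -m 0x1 --disable-cpumask-locks &
    # second target on the same core takes the core-0 lock for itself
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &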
00:22:12.246 [2024-05-15 00:48:15.324013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75563 ] 00:22:12.246 [2024-05-15 00:48:15.469059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.504 [2024-05-15 00:48:15.661883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.070 00:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:13.070 00:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:22:13.070 00:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75563 00:22:13.070 00:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75563 00:22:13.070 00:48:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75535 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 75535 ']' 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 75535 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75535 00:22:14.004 killing process with pid 75535 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75535' 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 75535 00:22:14.004 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 75535 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75563 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 75563 ']' 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 75563 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75563 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:14.570 killing process with pid 75563 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75563' 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 75563 00:22:14.570 00:48:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 75563 00:22:15.138 00:22:15.138 real 0m3.906s 00:22:15.138 user 0m4.334s 00:22:15.138 sys 0m1.092s 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:15.138 ************************************ 00:22:15.138 END TEST locking_app_on_unlocked_coremask 00:22:15.138 ************************************ 00:22:15.138 00:48:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:22:15.138 00:48:18 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:15.138 00:48:18 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:15.138 00:48:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:15.138 ************************************ 00:22:15.138 START TEST locking_app_on_locked_coremask 00:22:15.138 ************************************ 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75642 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75642 /var/tmp/spdk.sock 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 75642 ']' 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:15.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:15.138 00:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:15.138 [2024-05-15 00:48:18.260624] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:22:15.138 [2024-05-15 00:48:18.260725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75642 ] 00:22:15.138 [2024-05-15 00:48:18.395907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.397 [2024-05-15 00:48:18.498282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75670 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75670 /var/tmp/spdk2.sock 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 75670 /var/tmp/spdk2.sock 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 75670 /var/tmp/spdk2.sock 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 75670 ']' 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:16.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:16.334 00:48:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:16.334 [2024-05-15 00:48:19.316320] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
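The NOT waitforlisten wrapper being assembled above is the harness's negative assertion: run a command and succeed only if it fails. A stripped-down version of the idea (not the actual autotest_common.sh implementation, which also validates the argument and tracks the status in es):

    NOT() {
        # invert the exit status: return success only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    # usage as in this test: expect the second, locked-out launch to never come up
    NOT waitforlisten 75670 /var/tmp/spdk2.sock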
00:22:16.334 [2024-05-15 00:48:19.316456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75670 ] 00:22:16.334 [2024-05-15 00:48:19.460987] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75642 has claimed it. 00:22:16.334 [2024-05-15 00:48:19.461067] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:22:16.902 ERROR: process (pid: 75670) is no longer running 00:22:16.902 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 843: kill: (75670) - No such process 00:22:16.902 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:16.902 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:22:16.902 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:22:16.902 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:16.902 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:16.902 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:16.902 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75642 00:22:16.902 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75642 00:22:16.902 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75642 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 75642 ']' 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 75642 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75642 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:17.160 killing process with pid 75642 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75642' 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 75642 00:22:17.160 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 75642 00:22:17.728 00:22:17.728 real 0m2.567s 00:22:17.728 user 0m2.984s 00:22:17.728 sys 0m0.612s 00:22:17.728 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:17.728 00:48:20 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.728 ************************************ 00:22:17.728 END TEST locking_app_on_locked_coremask 00:22:17.728 ************************************ 00:22:17.728 00:48:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:22:17.728 00:48:20 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:17.728 00:48:20 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:17.728 00:48:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:17.728 ************************************ 00:22:17.728 START TEST locking_overlapped_coremask 00:22:17.728 ************************************ 00:22:17.728 00:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:22:17.728 00:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75727 00:22:17.728 00:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75727 /var/tmp/spdk.sock 00:22:17.728 00:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:17.728 00:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 75727 ']' 00:22:17.728 00:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.728 00:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:17.728 00:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.728 00:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:17.728 00:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:17.728 [2024-05-15 00:48:20.894809] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:22:17.728 [2024-05-15 00:48:20.894931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75727 ] 00:22:17.987 [2024-05-15 00:48:21.038725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.987 [2024-05-15 00:48:21.134460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.987 [2024-05-15 00:48:21.134629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.987 [2024-05-15 00:48:21.134630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75757 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75757 /var/tmp/spdk2.sock 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 75757 /var/tmp/spdk2.sock 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 75757 /var/tmp/spdk2.sock 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 75757 ']' 00:22:18.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:18.555 00:48:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:18.812 [2024-05-15 00:48:21.893955] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
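locking_overlapped_coremask runs the first target with mask 0x7 and then tries a second one with 0x1c; the masks share exactly one core, which is why the second launch is expected to fail. The overlap is plain from the bit values:

    # 0x7  = 0b00111 -> cores 0,1,2 (held by pid 75727)
    # 0x1c = 0b11100 -> cores 2,3,4 (requested by pid 75757)
    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2

That core 2 is exactly the one named in the claim_cpu_cores error that follows.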
00:22:18.812 [2024-05-15 00:48:21.894098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75757 ] 00:22:18.812 [2024-05-15 00:48:22.043483] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75727 has claimed it. 00:22:18.812 [2024-05-15 00:48:22.043561] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:22:19.376 ERROR: process (pid: 75757) is no longer running 00:22:19.377 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 843: kill: (75757) - No such process 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75727 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 75727 ']' 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 75727 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75727 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75727' 00:22:19.377 killing process with pid 75727 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 75727 00:22:19.377 00:48:22 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@971 -- # wait 75727 00:22:19.942 00:22:19.942 real 0m2.174s 00:22:19.942 user 0m5.955s 00:22:19.942 sys 0m0.491s 00:22:19.942 00:48:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:19.942 ************************************ 00:22:19.942 END TEST locking_overlapped_coremask 00:22:19.942 ************************************ 00:22:19.942 00:48:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:22:19.942 00:48:23 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:19.942 00:48:23 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:19.942 00:48:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:19.942 ************************************ 00:22:19.942 START TEST locking_overlapped_coremask_via_rpc 00:22:19.942 ************************************ 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=75803 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 75803 /var/tmp/spdk.sock 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 75803 ']' 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:19.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:19.942 00:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:19.942 [2024-05-15 00:48:23.130209] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:19.942 [2024-05-15 00:48:23.130581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75803 ] 00:22:20.201 [2024-05-15 00:48:23.272699] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:22:20.201 [2024-05-15 00:48:23.272758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:20.201 [2024-05-15 00:48:23.374995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.201 [2024-05-15 00:48:23.375109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.201 [2024-05-15 00:48:23.375116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=75833 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 75833 /var/tmp/spdk2.sock 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 75833 ']' 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:21.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:21.134 00:48:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:21.134 [2024-05-15 00:48:24.207532] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:21.134 [2024-05-15 00:48:24.207681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75833 ] 00:22:21.134 [2024-05-15 00:48:24.355300] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:22:21.134 [2024-05-15 00:48:24.355351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:21.392 [2024-05-15 00:48:24.546817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.392 [2024-05-15 00:48:24.546938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:21.392 [2024-05-15 00:48:24.546940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:22:21.959 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:21.960 [2024-05-15 00:48:25.206789] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75803 has claimed it. 
00:22:21.960 2024/05/15 00:48:25 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:22:21.960 request: 00:22:21.960 { 00:22:21.960 "method": "framework_enable_cpumask_locks", 00:22:21.960 "params": {} 00:22:21.960 } 00:22:21.960 Got JSON-RPC error response 00:22:21.960 GoRPCClient: error on JSON-RPC call 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 75803 /var/tmp/spdk.sock 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 75803 ']' 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:21.960 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:22.218 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:22.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:22.218 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:22:22.218 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 75833 /var/tmp/spdk2.sock 00:22:22.218 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 75833 ']' 00:22:22.218 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:22.218 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:22.218 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
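That JSON-RPC failure is the point of the test: with both targets started unlocked on overlapping masks, enabling locks on the first (mask 0x7) means a later framework_enable_cpumask_locks on the second (mask 0x1c) must collide on core 2. Reissued by hand it would look something like this, again assuming the standard scripts/rpc.py path:

    # pid 75803 already holds cores 0-2; locking the 0x1c instance must fail on core 2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected: JSON-RPC error Code=-32603, "Failed to claim CPU core: 2", as logged above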
00:22:22.218 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:22.218 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:22.477 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:22.477 ************************************ 00:22:22.477 END TEST locking_overlapped_coremask_via_rpc 00:22:22.477 ************************************ 00:22:22.477 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:22:22.477 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:22:22.477 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:22:22.477 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:22:22.477 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:22:22.477 00:22:22.477 real 0m2.703s 00:22:22.477 user 0m1.375s 00:22:22.477 sys 0m0.262s 00:22:22.477 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:22.477 00:48:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:22.735 00:48:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:22:22.735 00:48:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75803 ]] 00:22:22.735 00:48:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75803 00:22:22.735 00:48:25 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 75803 ']' 00:22:22.735 00:48:25 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 75803 00:22:22.735 00:48:25 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:22:22.735 00:48:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:22.735 00:48:25 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75803 00:22:22.735 killing process with pid 75803 00:22:22.735 00:48:25 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:22.736 00:48:25 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:22.736 00:48:25 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75803' 00:22:22.736 00:48:25 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 75803 00:22:22.736 00:48:25 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 75803 00:22:22.993 00:48:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75833 ]] 00:22:22.993 00:48:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75833 00:22:22.993 00:48:26 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 75833 ']' 00:22:22.993 00:48:26 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 75833 00:22:22.993 00:48:26 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:22:22.993 00:48:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:22.993 
00:48:26 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75833 00:22:22.993 killing process with pid 75833 00:22:22.993 00:48:26 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:22.993 00:48:26 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:22.993 00:48:26 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75833' 00:22:22.993 00:48:26 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 75833 00:22:22.993 00:48:26 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 75833 00:22:23.559 00:48:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:22:23.559 00:48:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:22:23.559 00:48:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75803 ]] 00:22:23.559 00:48:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75803 00:22:23.559 Process with pid 75803 is not found 00:22:23.559 Process with pid 75833 is not found 00:22:23.559 00:48:26 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 75803 ']' 00:22:23.559 00:48:26 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 75803 00:22:23.559 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (75803) - No such process 00:22:23.559 00:48:26 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 75803 is not found' 00:22:23.559 00:48:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75833 ]] 00:22:23.559 00:48:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75833 00:22:23.559 00:48:26 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 75833 ']' 00:22:23.559 00:48:26 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 75833 00:22:23.559 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (75833) - No such process 00:22:23.559 00:48:26 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 75833 is not found' 00:22:23.559 00:48:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:22:23.559 00:22:23.559 real 0m20.680s 00:22:23.559 user 0m36.160s 00:22:23.559 sys 0m5.681s 00:22:23.559 00:48:26 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:23.559 00:48:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:23.559 ************************************ 00:22:23.559 END TEST cpu_locks 00:22:23.559 ************************************ 00:22:23.559 00:22:23.559 real 0m48.638s 00:22:23.559 user 1m34.031s 00:22:23.559 sys 0m9.743s 00:22:23.559 00:48:26 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:23.559 00:48:26 event -- common/autotest_common.sh@10 -- # set +x 00:22:23.559 ************************************ 00:22:23.559 END TEST event 00:22:23.559 ************************************ 00:22:23.559 00:48:26 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:22:23.559 00:48:26 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:23.559 00:48:26 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:23.559 00:48:26 -- common/autotest_common.sh@10 -- # set +x 00:22:23.559 ************************************ 00:22:23.559 START TEST thread 00:22:23.559 ************************************ 00:22:23.559 00:48:26 thread -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:22:23.559 * Looking for test storage... 
00:22:23.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:22:23.559 00:48:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:22:23.559 00:48:26 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:22:23.559 00:48:26 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:23.559 00:48:26 thread -- common/autotest_common.sh@10 -- # set +x 00:22:23.559 ************************************ 00:22:23.559 START TEST thread_poller_perf 00:22:23.559 ************************************ 00:22:23.559 00:48:26 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:22:23.559 [2024-05-15 00:48:26.841034] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:23.559 [2024-05-15 00:48:26.841158] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75980 ] 00:22:23.818 [2024-05-15 00:48:26.978023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.818 [2024-05-15 00:48:27.077464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.818 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:22:25.195 ====================================== 00:22:25.195 busy:2209459029 (cyc) 00:22:25.195 total_run_count: 320000 00:22:25.195 tsc_hz: 2200000000 (cyc) 00:22:25.195 ====================================== 00:22:25.195 poller_cost: 6904 (cyc), 3138 (nsec) 00:22:25.195 00:22:25.195 real 0m1.329s 00:22:25.195 user 0m1.173s 00:22:25.195 sys 0m0.049s 00:22:25.195 00:48:28 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:25.195 00:48:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:22:25.195 ************************************ 00:22:25.195 END TEST thread_poller_perf 00:22:25.195 ************************************ 00:22:25.195 00:48:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:22:25.195 00:48:28 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:22:25.195 00:48:28 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:25.195 00:48:28 thread -- common/autotest_common.sh@10 -- # set +x 00:22:25.195 ************************************ 00:22:25.195 START TEST thread_poller_perf 00:22:25.195 ************************************ 00:22:25.195 00:48:28 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:22:25.195 [2024-05-15 00:48:28.234403] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:25.195 [2024-05-15 00:48:28.234515] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76015 ] 00:22:25.195 [2024-05-15 00:48:28.374263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.195 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:22:25.195 [2024-05-15 00:48:28.466346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.571 ====================================== 00:22:26.571 busy:2202167745 (cyc) 00:22:26.571 total_run_count: 4169000 00:22:26.571 tsc_hz: 2200000000 (cyc) 00:22:26.571 ====================================== 00:22:26.571 poller_cost: 528 (cyc), 240 (nsec) 00:22:26.571 00:22:26.571 real 0m1.319s 00:22:26.571 user 0m1.153s 00:22:26.571 sys 0m0.059s 00:22:26.571 00:48:29 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:26.571 00:48:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:22:26.571 ************************************ 00:22:26.571 END TEST thread_poller_perf 00:22:26.571 ************************************ 00:22:26.571 00:48:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:22:26.571 ************************************ 00:22:26.571 END TEST thread 00:22:26.571 ************************************ 00:22:26.571 00:22:26.571 real 0m2.851s 00:22:26.571 user 0m2.397s 00:22:26.571 sys 0m0.233s 00:22:26.571 00:48:29 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:26.571 00:48:29 thread -- common/autotest_common.sh@10 -- # set +x 00:22:26.571 00:48:29 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:22:26.571 00:48:29 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:26.571 00:48:29 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:26.571 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:22:26.572 ************************************ 00:22:26.572 START TEST accel 00:22:26.572 ************************************ 00:22:26.572 00:48:29 accel -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:22:26.572 * Looking for test storage... 00:22:26.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:22:26.572 00:48:29 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:22:26.572 00:48:29 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:22:26.572 00:48:29 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:22:26.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.572 00:48:29 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76090 00:22:26.572 00:48:29 accel -- accel/accel.sh@63 -- # waitforlisten 76090 00:22:26.572 00:48:29 accel -- common/autotest_common.sh@828 -- # '[' -z 76090 ']' 00:22:26.572 00:48:29 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.572 00:48:29 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:26.572 00:48:29 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:26.572 00:48:29 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:22:26.572 00:48:29 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:26.572 00:48:29 accel -- accel/accel.sh@61 -- # build_accel_config 00:22:26.572 00:48:29 accel -- common/autotest_common.sh@10 -- # set +x 00:22:26.572 00:48:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:26.572 00:48:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:26.572 00:48:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:26.572 00:48:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:26.572 00:48:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:26.572 00:48:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:22:26.572 00:48:29 accel -- accel/accel.sh@41 -- # jq -r . 00:22:26.572 [2024-05-15 00:48:29.777402] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:26.572 [2024-05-15 00:48:29.777756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76090 ] 00:22:26.831 [2024-05-15 00:48:29.911234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.831 [2024-05-15 00:48:30.004211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@861 -- # return 0 00:22:27.768 00:48:30 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:22:27.768 00:48:30 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:22:27.768 00:48:30 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:22:27.768 00:48:30 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:22:27.768 00:48:30 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:22:27.768 00:48:30 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.768 00:48:30 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@10 -- # set +x 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 
00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # IFS== 00:22:27.768 00:48:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:22:27.768 00:48:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:22:27.768 00:48:30 accel -- accel/accel.sh@75 -- # killprocess 76090 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@947 -- # '[' -z 76090 ']' 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@951 -- # kill -0 76090 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@952 -- # uname 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 76090 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 76090' 00:22:27.768 killing process with pid 76090 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@966 -- # kill 76090 00:22:27.768 00:48:30 accel -- common/autotest_common.sh@971 -- # wait 76090 00:22:28.027 00:48:31 accel -- accel/accel.sh@76 -- # trap - ERR 00:22:28.027 00:48:31 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:22:28.027 00:48:31 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:28.027 00:48:31 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:28.027 00:48:31 accel -- common/autotest_common.sh@10 -- # set +x 00:22:28.027 00:48:31 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:22:28.027 00:48:31 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:22:28.027 00:48:31 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:22:28.027 00:48:31 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:28.027 00:48:31 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:28.027 00:48:31 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:28.027 00:48:31 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:28.027 00:48:31 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:28.027 00:48:31 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:22:28.027 00:48:31 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:22:28.027 00:48:31 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:28.027 00:48:31 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:22:28.027 00:48:31 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:22:28.027 00:48:31 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:22:28.027 00:48:31 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:28.027 00:48:31 accel -- common/autotest_common.sh@10 -- # set +x 00:22:28.027 ************************************ 00:22:28.027 START TEST accel_missing_filename 00:22:28.027 ************************************ 00:22:28.027 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:22:28.027 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:22:28.027 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:22:28.027 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:22:28.027 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:28.027 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:22:28.027 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:28.027 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:22:28.027 00:48:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:22:28.027 00:48:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:22:28.291 00:48:31 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:28.291 00:48:31 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:28.291 00:48:31 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:28.291 00:48:31 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:28.291 00:48:31 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:28.291 00:48:31 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:22:28.291 00:48:31 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:22:28.291 [2024-05-15 00:48:31.333807] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:28.291 [2024-05-15 00:48:31.333913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76159 ] 00:22:28.291 [2024-05-15 00:48:31.479926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.291 [2024-05-15 00:48:31.573139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.555 [2024-05-15 00:48:31.634285] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:28.555 [2024-05-15 00:48:31.713510] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:22:28.555 A filename is required. 
00:22:28.555 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:22:28.555 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:28.555 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:22:28.555 ************************************ 00:22:28.555 END TEST accel_missing_filename 00:22:28.555 ************************************ 00:22:28.555 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:22:28.555 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:22:28.555 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:28.555 00:22:28.555 real 0m0.474s 00:22:28.555 user 0m0.296s 00:22:28.555 sys 0m0.126s 00:22:28.555 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:28.555 00:48:31 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:22:28.555 00:48:31 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:22:28.555 00:48:31 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:22:28.555 00:48:31 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:28.555 00:48:31 accel -- common/autotest_common.sh@10 -- # set +x 00:22:28.555 ************************************ 00:22:28.555 START TEST accel_compress_verify 00:22:28.555 ************************************ 00:22:28.555 00:48:31 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:22:28.555 00:48:31 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:22:28.555 00:48:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:22:28.555 00:48:31 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:22:28.555 00:48:31 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:28.555 00:48:31 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:22:28.555 00:48:31 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:28.555 00:48:31 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:22:28.555 00:48:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:22:28.555 00:48:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:22:28.555 00:48:31 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:28.555 00:48:31 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:28.555 00:48:31 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:28.555 00:48:31 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:28.555 00:48:31 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:28.555 00:48:31 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:22:28.555 00:48:31 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:22:28.814 [2024-05-15 00:48:31.860297] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:28.814 [2024-05-15 00:48:31.860396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76184 ] 00:22:28.814 [2024-05-15 00:48:31.995684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.073 [2024-05-15 00:48:32.124919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.073 [2024-05-15 00:48:32.201085] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:29.073 [2024-05-15 00:48:32.310385] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:22:29.331 00:22:29.331 Compression does not support the verify option, aborting. 00:22:29.331 00:48:32 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:22:29.331 00:48:32 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:29.331 00:48:32 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:22:29.331 00:48:32 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:22:29.332 00:48:32 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:22:29.332 00:48:32 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:29.332 00:22:29.332 real 0m0.582s 00:22:29.332 user 0m0.369s 00:22:29.332 sys 0m0.151s 00:22:29.332 00:48:32 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:29.332 ************************************ 00:22:29.332 END TEST accel_compress_verify 00:22:29.332 ************************************ 00:22:29.332 00:48:32 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:22:29.332 00:48:32 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:22:29.332 00:48:32 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:22:29.332 00:48:32 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:29.332 00:48:32 accel -- common/autotest_common.sh@10 -- # set +x 00:22:29.332 ************************************ 00:22:29.332 START TEST accel_wrong_workload 00:22:29.332 ************************************ 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:22:29.332 00:48:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:22:29.332 00:48:32 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:22:29.332 00:48:32 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:29.332 00:48:32 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:29.332 00:48:32 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:29.332 00:48:32 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:29.332 00:48:32 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:29.332 00:48:32 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:22:29.332 00:48:32 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:22:29.332 Unsupported workload type: foobar 00:22:29.332 [2024-05-15 00:48:32.497119] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:22:29.332 accel_perf options: 00:22:29.332 [-h help message] 00:22:29.332 [-q queue depth per core] 00:22:29.332 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:22:29.332 [-T number of threads per core 00:22:29.332 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:22:29.332 [-t time in seconds] 00:22:29.332 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:22:29.332 [ dif_verify, , dif_generate, dif_generate_copy 00:22:29.332 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:22:29.332 [-l for compress/decompress workloads, name of uncompressed input file 00:22:29.332 [-S for crc32c workload, use this seed value (default 0) 00:22:29.332 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:22:29.332 [-f for fill workload, use this BYTE value (default 255) 00:22:29.332 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:22:29.332 [-y verify result if this switch is on] 00:22:29.332 [-a tasks to allocate per core (default: same value as -q)] 00:22:29.332 Can be used to spread operations across a wider range of memory. 
00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:29.332 00:22:29.332 real 0m0.031s 00:22:29.332 user 0m0.017s 00:22:29.332 sys 0m0.014s 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:29.332 00:48:32 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:22:29.332 ************************************ 00:22:29.332 END TEST accel_wrong_workload 00:22:29.332 ************************************ 00:22:29.332 00:48:32 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:22:29.332 00:48:32 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:22:29.332 00:48:32 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:29.332 00:48:32 accel -- common/autotest_common.sh@10 -- # set +x 00:22:29.332 ************************************ 00:22:29.332 START TEST accel_negative_buffers 00:22:29.332 ************************************ 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:22:29.332 00:48:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:22:29.332 00:48:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:22:29.332 00:48:32 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:29.332 00:48:32 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:29.332 00:48:32 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:29.332 00:48:32 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:29.332 00:48:32 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:29.332 00:48:32 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:22:29.332 00:48:32 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:22:29.332 -x option must be non-negative. 
00:22:29.332 [2024-05-15 00:48:32.585333] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:22:29.332 accel_perf options: 00:22:29.332 [-h help message] 00:22:29.332 [-q queue depth per core] 00:22:29.332 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:22:29.332 [-T number of threads per core 00:22:29.332 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:22:29.332 [-t time in seconds] 00:22:29.332 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:22:29.332 [ dif_verify, , dif_generate, dif_generate_copy 00:22:29.332 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:22:29.332 [-l for compress/decompress workloads, name of uncompressed input file 00:22:29.332 [-S for crc32c workload, use this seed value (default 0) 00:22:29.332 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:22:29.332 [-f for fill workload, use this BYTE value (default 255) 00:22:29.332 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:22:29.332 [-y verify result if this switch is on] 00:22:29.332 [-a tasks to allocate per core (default: same value as -q)] 00:22:29.332 Can be used to spread operations across a wider range of memory. 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:29.332 00:22:29.332 real 0m0.035s 00:22:29.332 user 0m0.016s 00:22:29.332 sys 0m0.019s 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:29.332 ************************************ 00:22:29.332 END TEST accel_negative_buffers 00:22:29.332 ************************************ 00:22:29.332 00:48:32 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:22:29.591 00:48:32 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:22:29.591 00:48:32 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:22:29.591 00:48:32 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:29.591 00:48:32 accel -- common/autotest_common.sh@10 -- # set +x 00:22:29.591 ************************************ 00:22:29.591 START TEST accel_crc32c 00:22:29.591 ************************************ 00:22:29.591 00:48:32 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:29.591 00:48:32 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:29.592 00:48:32 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:29.592 00:48:32 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:22:29.592 00:48:32 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:22:29.592 [2024-05-15 00:48:32.666826] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:29.592 [2024-05-15 00:48:32.666954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76243 ] 00:22:29.592 [2024-05-15 00:48:32.808288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.851 [2024-05-15 00:48:32.936232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.851 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:29.852 00:48:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:31.230 00:48:34 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:22:31.230 00:48:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:31.230 00:22:31.230 real 0m1.525s 00:22:31.230 user 0m1.284s 00:22:31.230 sys 0m0.149s 00:22:31.230 00:48:34 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:31.230 00:48:34 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:22:31.230 ************************************ 00:22:31.230 END TEST accel_crc32c 00:22:31.230 ************************************ 00:22:31.230 00:48:34 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:22:31.230 00:48:34 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:22:31.230 00:48:34 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:31.230 00:48:34 accel -- common/autotest_common.sh@10 -- # set +x 00:22:31.230 ************************************ 00:22:31.230 START TEST accel_crc32c_C2 00:22:31.230 ************************************ 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:31.230 00:48:34 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:22:31.230 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:22:31.230 [2024-05-15 00:48:34.250805] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:31.230 [2024-05-15 00:48:34.250924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76277 ] 00:22:31.230 [2024-05-15 00:48:34.391700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.230 [2024-05-15 00:48:34.481493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:31.489 00:48:34 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.489 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:31.490 00:48:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:32.426 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:32.427 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:32.427 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:32.427 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:32.427 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:22:32.427 00:48:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:32.427 00:22:32.427 real 0m1.469s 00:22:32.427 user 0m1.254s 00:22:32.427 sys 0m0.122s 00:22:32.427 00:48:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:32.427 00:48:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:22:32.427 ************************************ 00:22:32.427 END TEST accel_crc32c_C2 00:22:32.427 ************************************ 00:22:32.686 00:48:35 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:22:32.686 00:48:35 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:22:32.686 00:48:35 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:32.686 00:48:35 accel -- common/autotest_common.sh@10 -- # set +x 00:22:32.686 ************************************ 00:22:32.686 START TEST accel_copy 00:22:32.686 ************************************ 00:22:32.686 00:48:35 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:22:32.686 
00:48:35 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:22:32.686 00:48:35 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:22:32.686 [2024-05-15 00:48:35.768300] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:32.686 [2024-05-15 00:48:35.768389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76317 ] 00:22:32.686 [2024-05-15 00:48:35.910848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.945 [2024-05-15 00:48:36.032710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.945 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:32.945 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.945 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.945 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.945 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:32.946 00:48:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
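For reference, the accel_copy run configured above reduces to a single accel_perf invocation. A minimal way to reproduce it by hand, assuming the repo layout shown in the trace and that the empty JSON accel config fed through -c /dev/fd/62 can simply be omitted:

    # 1-second software copy of 4096-byte buffers, flags copied from the trace above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y

The parsed values echoed back above (copy, '4096 bytes', software, 32/32, '1 seconds') are exactly what that command reports.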
00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 ************************************ 00:22:34.346 END TEST accel_copy 00:22:34.346 ************************************ 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:22:34.346 00:48:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:34.346 00:22:34.346 real 0m1.509s 00:22:34.346 user 0m1.285s 00:22:34.346 sys 0m0.129s 00:22:34.346 00:48:37 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:34.346 00:48:37 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:22:34.346 00:48:37 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:22:34.346 00:48:37 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:22:34.346 00:48:37 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:34.346 00:48:37 accel -- common/autotest_common.sh@10 -- # set +x 00:22:34.346 ************************************ 00:22:34.346 START TEST accel_fill 00:22:34.346 ************************************ 00:22:34.346 00:48:37 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:22:34.346 [2024-05-15 00:48:37.330845] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
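The real/user/sys trio and the START TEST / END TEST banners around each test come from the run_test wrapper in autotest_common.sh; its implementation is not shown in this log, but a rough sketch of the behaviour visible here would be:

    # hypothetical reduction of run_test, inferred only from the banners and timing above
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                  # produces the real/user/sys block
        echo "END TEST $name"
    }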
00:22:34.346 [2024-05-15 00:48:37.330959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76346 ] 00:22:34.346 [2024-05-15 00:48:37.469292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.346 [2024-05-15 00:48:37.562294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.346 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:34.605 00:48:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:35.542 00:48:38 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:22:35.542 ************************************ 00:22:35.542 END TEST accel_fill 00:22:35.542 ************************************ 00:22:35.542 00:48:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:35.542 00:22:35.542 real 0m1.468s 00:22:35.542 user 0m1.257s 00:22:35.542 sys 0m0.119s 00:22:35.542 00:48:38 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:35.542 00:48:38 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:22:35.542 00:48:38 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:22:35.542 00:48:38 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:22:35.542 00:48:38 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:35.542 00:48:38 accel -- common/autotest_common.sh@10 -- # set +x 00:22:35.542 ************************************ 00:22:35.542 START TEST accel_copy_crc32c 00:22:35.542 ************************************ 00:22:35.542 00:48:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:22:35.542 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:22:35.542 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:22:35.542 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:35.542 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:35.542 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:22:35.542 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:22:35.542 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:22:35.802 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:35.802 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:35.802 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:35.802 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:35.802 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:35.802 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:22:35.802 00:48:38 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:22:35.802 [2024-05-15 00:48:38.849994] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
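The fill run that just completed is the one test in this stretch with extra tuning flags. Read against the parsed config above, -f 128 is the fill byte (echoed back as 0x80), and the 64/64 pair replacing the 32/32 defaults of the other runs lines up with -q 64 -a 64. A hand-run equivalent, under the same repo-layout assumption as before:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y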
00:22:35.802 [2024-05-15 00:48:38.850107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76386 ] 00:22:35.802 [2024-05-15 00:48:38.987867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.802 [2024-05-15 00:48:39.083850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.061 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.062 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:36.062 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:36.062 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:36.062 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:36.062 00:48:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:37.439 00:22:37.439 real 0m1.476s 00:22:37.439 user 0m1.262s 00:22:37.439 sys 0m0.119s 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:37.439 ************************************ 00:22:37.439 END TEST accel_copy_crc32c 00:22:37.439 ************************************ 00:22:37.439 00:48:40 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:22:37.439 00:48:40 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:22:37.439 00:48:40 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:22:37.439 00:48:40 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:37.439 00:48:40 accel -- common/autotest_common.sh@10 -- # set +x 00:22:37.439 ************************************ 00:22:37.439 START TEST accel_copy_crc32c_C2 00:22:37.439 ************************************ 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:22:37.439 [2024-05-15 00:48:40.380181] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:37.439 [2024-05-15 00:48:40.380295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76415 ] 00:22:37.439 [2024-05-15 00:48:40.515809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.439 [2024-05-15 00:48:40.613907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
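The copy_crc32c_C2 variant configured above differs from the plain copy_crc32c run only by -C 2, and the parsed buffer sizes shift accordingly: where the earlier run reported 4096/4096 bytes, this one reports '4096 bytes' and '8192 bytes'. That is consistent with -C chaining two 4 KiB source buffers per operation (an inference from the trace, not a statement of accel_perf's documented semantics):

    2 source buffers x 4096 bytes = 8192 bytes processed per submission with -C 2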
00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:37.439 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:37.440 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:37.440 00:48:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:38.818 00:22:38.818 real 0m1.471s 00:22:38.818 user 0m1.271s 00:22:38.818 sys 0m0.106s 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:38.818 00:48:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.818 ************************************ 00:22:38.818 END TEST accel_copy_crc32c_C2 00:22:38.818 ************************************ 00:22:38.818 00:48:41 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:22:38.818 00:48:41 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:22:38.818 00:48:41 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:38.818 00:48:41 accel -- common/autotest_common.sh@10 -- # set +x 00:22:38.818 ************************************ 00:22:38.818 START TEST accel_dualcast 00:22:38.818 ************************************ 00:22:38.818 00:48:41 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:22:38.818 00:48:41 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:22:38.818 00:48:41 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:22:38.818 00:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:38.818 00:48:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:38.819 00:48:41 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:22:38.819 00:48:41 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:22:38.819 00:48:41 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:22:38.819 00:48:41 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:38.819 00:48:41 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:38.819 00:48:41 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:38.819 00:48:41 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:38.819 00:48:41 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:38.819 00:48:41 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:22:38.819 00:48:41 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:22:38.819 [2024-05-15 00:48:41.899309] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
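The dualcast test just launched (dualcast: one source buffer written out to two destinations) follows the same invocation pattern, as do the compare and xor tests that come after it. One way to sweep these workloads by hand against the software module, with the same binary-path assumption as above:

    for w in dualcast compare xor; do
        /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w" -y
    done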
00:22:38.819 [2024-05-15 00:48:41.899401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76455 ] 00:22:38.819 [2024-05-15 00:48:42.040449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.096 [2024-05-15 00:48:42.133309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.096 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.097 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:39.097 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.097 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.097 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:39.097 00:48:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:39.097 00:48:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:39.097 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:39.097 00:48:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:40.057 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:40.058 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:40.058 00:48:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:40.058 00:48:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:40.058 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:40.058 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:40.322 
00:48:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:40.322 00:48:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:40.322 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:40.322 ************************************ 00:22:40.322 END TEST accel_dualcast 00:22:40.322 ************************************ 00:22:40.322 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:40.322 00:48:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:22:40.322 00:48:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:22:40.322 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:22:40.322 00:48:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:22:40.322 00:48:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:40.322 00:48:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:22:40.322 00:48:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:40.322 00:22:40.322 real 0m1.471s 00:22:40.322 user 0m1.262s 00:22:40.322 sys 0m0.110s 00:22:40.322 00:48:43 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:40.322 00:48:43 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:22:40.322 00:48:43 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:22:40.322 00:48:43 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:22:40.322 00:48:43 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:40.322 00:48:43 accel -- common/autotest_common.sh@10 -- # set +x 00:22:40.322 ************************************ 00:22:40.322 START TEST accel_compare 00:22:40.322 ************************************ 00:22:40.322 00:48:43 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:22:40.322 00:48:43 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:22:40.322 [2024-05-15 00:48:43.433629] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
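The compare run now starting rounds out the single-buffer software workloads. Note that every test in this stretch reports roughly 1.47 to 1.51 s of wall time for a 1-second (-t 1) workload; the extra half second is presumably accel_perf start-up and teardown rather than measurement time. To pull those figures out of a console log like this one (file name assumed):

    grep -oE 'real[[:space:]]+[0-9]+m[0-9.]+s' nvmf-tcp-vg-autotest.log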
00:22:40.322 [2024-05-15 00:48:43.433805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76484 ] 00:22:40.322 [2024-05-15 00:48:43.583141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.581 [2024-05-15 00:48:43.675099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:40.581 00:48:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:41.961 00:48:44 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:22:41.961 ************************************ 00:22:41.961 END TEST accel_compare 00:22:41.961 ************************************ 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:22:41.961 00:48:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:41.961 00:22:41.961 real 0m1.489s 00:22:41.961 user 0m1.272s 00:22:41.961 sys 0m0.124s 00:22:41.961 00:48:44 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:41.961 00:48:44 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:22:41.961 00:48:44 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:22:41.961 00:48:44 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:22:41.961 00:48:44 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:41.961 00:48:44 accel -- common/autotest_common.sh@10 -- # set +x 00:22:41.961 ************************************ 00:22:41.961 START TEST accel_xor 00:22:41.961 ************************************ 00:22:41.961 00:48:44 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:22:41.961 00:48:44 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:22:41.961 [2024-05-15 00:48:44.970939] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
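The accel_xor pass that starts here exercises the software accel module with a two-source XOR over 4096-byte buffers for a one-second run (the val= assignments traced below). A minimal standalone equivalent of the accel_perf command line captured above, assuming the same repo checkout and with the JSON accel config saved to a file (accel.json here) rather than piped in on /dev/fd/62, would be:

  # XOR workload, verify the result (-y), run for 1 second
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c accel.json -t 1 -w xor -y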
00:22:41.961 [2024-05-15 00:48:44.971088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76524 ] 00:22:41.961 [2024-05-15 00:48:45.115002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.961 [2024-05-15 00:48:45.211986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:22:42.220 00:48:45 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:22:42.220 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:42.221 00:48:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:22:43.158 00:48:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:43.158 00:22:43.158 real 0m1.483s 00:22:43.158 user 0m1.272s 00:22:43.158 sys 0m0.114s 00:22:43.158 00:48:46 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:43.158 00:48:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:22:43.158 ************************************ 00:22:43.158 END TEST accel_xor 00:22:43.158 ************************************ 00:22:43.417 00:48:46 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:22:43.417 00:48:46 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:22:43.417 00:48:46 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:43.417 00:48:46 accel -- common/autotest_common.sh@10 -- # set +x 00:22:43.417 ************************************ 00:22:43.417 START TEST accel_xor 00:22:43.417 ************************************ 00:22:43.417 00:48:46 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:22:43.417 00:48:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:22:43.417 [2024-05-15 00:48:46.504687] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
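The second accel_xor pass repeats the workload with three source buffers (-x 3; val=3 in the trace below) against the same software module. Under the same assumption as above (config saved to accel.json), the equivalent standalone invocation would be:

  # XOR across 3 source buffers, verify, 1-second run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c accel.json -t 1 -w xor -y -x 3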
00:22:43.417 [2024-05-15 00:48:46.504926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76553 ] 00:22:43.417 [2024-05-15 00:48:46.640157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.676 [2024-05-15 00:48:46.734966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:22:43.676 00:48:46 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.676 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:43.677 00:48:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:22:45.054 00:48:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:45.054 00:22:45.054 real 0m1.463s 00:22:45.054 user 0m1.261s 00:22:45.054 sys 0m0.110s 00:22:45.054 00:48:47 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:45.054 00:48:47 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:22:45.054 ************************************ 00:22:45.054 END TEST accel_xor 00:22:45.054 ************************************ 00:22:45.054 00:48:47 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:22:45.054 00:48:47 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:22:45.054 00:48:47 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:45.054 00:48:47 accel -- common/autotest_common.sh@10 -- # set +x 00:22:45.054 ************************************ 00:22:45.054 START TEST accel_dif_verify 00:22:45.054 ************************************ 00:22:45.054 00:48:47 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:22:45.054 00:48:47 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:22:45.054 00:48:47 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:22:45.054 00:48:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.054 00:48:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.054 00:48:47 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:22:45.054 00:48:47 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:22:45.054 00:48:47 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:22:45.054 [2024-05-15 00:48:48.020765] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
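accel_dif_verify switches the workload to DIF verification; the 4096-, 512-, and 8-byte values in the trace below describe the buffer size and per-block DIF metadata geometry being checked. With the same accel.json assumption, the command line captured above reduces to:

  # DIF verify workload on the software module, 1-second run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c accel.json -t 1 -w dif_verify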
00:22:45.054 [2024-05-15 00:48:48.020862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76592 ] 00:22:45.054 [2024-05-15 00:48:48.157534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.054 [2024-05-15 00:48:48.238657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.054 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:45.055 00:48:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:46.432 ************************************ 00:22:46.432 END TEST accel_dif_verify 00:22:46.432 ************************************ 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:22:46.432 00:48:49 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:46.432 00:22:46.432 real 0m1.455s 00:22:46.432 user 0m1.244s 00:22:46.432 sys 0m0.121s 00:22:46.432 00:48:49 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:46.432 00:48:49 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:22:46.432 00:48:49 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:22:46.432 00:48:49 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:22:46.432 00:48:49 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:46.432 00:48:49 accel -- common/autotest_common.sh@10 -- # set +x 00:22:46.432 ************************************ 00:22:46.432 START TEST accel_dif_generate 00:22:46.432 ************************************ 00:22:46.432 00:48:49 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:22:46.432 00:48:49 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:22:46.432 [2024-05-15 00:48:49.534016] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:46.432 [2024-05-15 00:48:49.534131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76622 ] 00:22:46.432 [2024-05-15 00:48:49.673979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.692 [2024-05-15 00:48:49.769274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 
00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:46.692 00:48:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:22:48.069 00:48:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:48.069 00:22:48.069 real 0m1.488s 00:22:48.069 user 0m1.268s 00:22:48.069 sys 0m0.123s 00:22:48.069 00:48:50 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:48.069 
00:48:50 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:22:48.069 ************************************ 00:22:48.069 END TEST accel_dif_generate 00:22:48.069 ************************************ 00:22:48.069 00:48:51 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:22:48.069 00:48:51 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:22:48.069 00:48:51 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:48.069 00:48:51 accel -- common/autotest_common.sh@10 -- # set +x 00:22:48.069 ************************************ 00:22:48.069 START TEST accel_dif_generate_copy 00:22:48.069 ************************************ 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:22:48.069 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:22:48.069 [2024-05-15 00:48:51.072522] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
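accel_dif_generate_copy combines DIF generation with a copy into a separate destination buffer, again on the software module for one second. Under the same assumptions, the standalone form of the command captured above would be:

  # generate DIF metadata while copying, 1-second run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c accel.json -t 1 -w dif_generate_copy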
00:22:48.069 [2024-05-15 00:48:51.072644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76657 ] 00:22:48.069 [2024-05-15 00:48:51.213330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.069 [2024-05-15 00:48:51.313608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:48.329 00:48:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:22:49.266 ************************************ 00:22:49.266 END TEST accel_dif_generate_copy 00:22:49.266 ************************************ 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:49.266 00:22:49.266 real 0m1.482s 00:22:49.266 user 0m1.259s 00:22:49.266 sys 0m0.123s 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:49.266 00:48:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:22:49.524 00:48:52 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:22:49.524 00:48:52 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:49.524 00:48:52 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:22:49.524 00:48:52 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:49.524 00:48:52 accel -- common/autotest_common.sh@10 -- # set +x 00:22:49.524 ************************************ 00:22:49.524 START TEST accel_comp 00:22:49.524 ************************************ 00:22:49.524 00:48:52 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
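The dense runs of "IFS=:", "read -r var val" and 'case "$var"' lines throughout these accel sections are bash xtrace output from the parameter loop in accel.sh: each test streams its settings as colon-separated key:value pairs, and the loop picks out the opcode and the backing module (the remaining values, 32, 32, 1, '1 seconds', No/Yes, are the other per-test settings). A minimal sketch of that pattern, with illustrative key and variable names rather than the real accel.sh identifiers:

  # Illustrative parser loop; the keys and the here-string are stand-ins.
  config_dump=$'opc:dif_generate_copy\nmodule:software\nqueue_depth:32\nrun_time:1 seconds'
  while IFS=: read -r var val; do
      case "$var" in
          opc)    accel_opc=$val ;;
          module) accel_module=$val ;;
      esac
  done <<< "$config_dump"
  echo "opcode=$accel_opc module=$accel_module"   # -> opcode=dif_generate_copy module=software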
00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:22:49.524 00:48:52 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:22:49.524 [2024-05-15 00:48:52.617086] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:49.524 [2024-05-15 00:48:52.617228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76691 ] 00:22:49.524 [2024-05-15 00:48:52.758684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.783 [2024-05-15 00:48:52.854621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.783 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:49.784 00:48:52 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:49.784 00:48:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:51.164 ************************************ 00:22:51.164 END TEST accel_comp 00:22:51.164 ************************************ 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:22:51.164 00:48:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:51.164 00:22:51.164 real 0m1.476s 00:22:51.164 user 0m1.259s 00:22:51.164 sys 0m0.123s 00:22:51.164 00:48:54 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:51.164 00:48:54 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:22:51.164 00:48:54 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:22:51.164 00:48:54 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:22:51.164 00:48:54 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:51.164 00:48:54 accel -- common/autotest_common.sh@10 -- # set +x 00:22:51.164 ************************************ 00:22:51.164 START TEST accel_decomp 00:22:51.164 ************************************ 00:22:51.164 00:48:54 accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:22:51.164 00:48:54 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:22:51.164 
00:48:54 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:22:51.164 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.164 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.164 00:48:54 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:22:51.164 00:48:54 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:22:51.164 00:48:54 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:22:51.164 00:48:54 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:51.164 00:48:54 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:22:51.165 [2024-05-15 00:48:54.145522] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:51.165 [2024-05-15 00:48:54.145619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76726 ] 00:22:51.165 [2024-05-15 00:48:54.280955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.165 [2024-05-15 00:48:54.379373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
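Every accel_perf command in this log takes its configuration as "-c /dev/fd/62": that path is how bash presents a process substitution to a child process, so the JSON accel config assembled by build_accel_config is handed over without touching disk (and with no hardware module configured on this runner it appears to stay empty, which is why accel_module ends up as software). A minimal, self-contained demonstration of the mechanism, unrelated to the SPDK scripts themselves:

  # The child process sees the substitution as a /dev/fd/NN path.
  cat <(echo '{"example": "config"}')
  # prints: {"example": "config"}   (bash opened it as e.g. /dev/fd/63)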
00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.165 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.423 00:48:54 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:51.423 00:48:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:22:52.356 00:48:55 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:52.356 00:22:52.356 real 0m1.476s 00:22:52.356 user 0m1.261s 00:22:52.356 sys 0m0.120s 00:22:52.356 00:48:55 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:52.356 00:48:55 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:22:52.356 ************************************ 00:22:52.356 END TEST accel_decomp 00:22:52.356 ************************************ 00:22:52.356 00:48:55 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:22:52.356 00:48:55 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:22:52.356 00:48:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:52.356 00:48:55 accel -- common/autotest_common.sh@10 -- # set +x 
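Each section in this log is driven by run_test from test/common/autotest_common.sh, which prints the START/END TEST banners, times the wrapped command (producing the real/user/sys triplets above and below), and restores xtrace afterwards. A rough sketch of that wrapper shape (the real helper does more, including exit-code bookkeeping and per-test log tagging, and differs in detail):

  # Hedged sketch of a run_test-style wrapper; not the verbatim helper.
  run_test_sketch() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"
      echo "************ END TEST $name ************"
  }
  # e.g.: run_test_sketch accel_comp accel_test -t 1 -w compress -l test/accel/bib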
00:22:52.614 ************************************ 00:22:52.614 START TEST accel_decmop_full 00:22:52.614 ************************************ 00:22:52.614 00:48:55 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:22:52.614 00:48:55 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:22:52.614 [2024-05-15 00:48:55.666838] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:22:52.614 [2024-05-15 00:48:55.666930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76760 ] 00:22:52.614 [2024-05-15 00:48:55.804887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.921 [2024-05-15 00:48:55.903666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:22:52.921 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:52.922 00:48:55 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:53.856 00:48:57 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:22:53.856 00:48:57 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:53.856 00:22:53.856 real 0m1.484s 00:22:53.856 user 0m1.278s 00:22:53.856 sys 0m0.117s 00:22:53.856 00:48:57 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:53.856 00:48:57 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:22:53.856 ************************************ 00:22:53.856 END TEST accel_decmop_full 00:22:53.856 ************************************ 00:22:54.114 00:48:57 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:22:54.114 00:48:57 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:22:54.114 00:48:57 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:54.114 00:48:57 accel -- common/autotest_common.sh@10 -- # set +x 00:22:54.114 ************************************ 00:22:54.114 START TEST accel_decomp_mcore 00:22:54.114 ************************************ 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
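The mcore variant started here adds "-m 0xf" to accel_perf, and the EAL output that follows confirms it: "Total cores available: 4" and four "Reactor started on core N" notices instead of one. The mask is just a bitmap of cores; a small helper (not part of the SPDK scripts) to count the selected cores:

  # Count the bits set in a coremask such as the 0xf passed above.
  count_cores() {
      local mask=$(( $1 )) n=0
      while [ "$mask" -gt 0 ]; do
          n=$(( n + (mask & 1) ))
          mask=$(( mask >> 1 ))
      done
      echo "$n"
  }
  count_cores 0xf   # -> 4, matching the four reactors in this run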
00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:22:54.114 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:22:54.114 [2024-05-15 00:48:57.205871] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:54.114 [2024-05-15 00:48:57.206030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76795 ] 00:22:54.114 [2024-05-15 00:48:57.348814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.383 [2024-05-15 00:48:57.444361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.383 [2024-05-15 00:48:57.444465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.383 [2024-05-15 00:48:57.444609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.383 [2024-05-15 00:48:57.444615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:54.383 00:48:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 ************************************ 00:22:55.759 END TEST accel_decomp_mcore 00:22:55.759 ************************************ 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:55.759 00:22:55.759 real 0m1.511s 00:22:55.759 user 0m4.675s 00:22:55.759 sys 0m0.136s 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:55.759 00:48:58 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:22:55.759 00:48:58 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:22:55.759 00:48:58 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:22:55.759 00:48:58 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:55.759 00:48:58 accel -- common/autotest_common.sh@10 -- # set +x 00:22:55.759 ************************************ 00:22:55.759 START TEST accel_decomp_full_mcore 00:22:55.759 ************************************ 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:22:55.759 00:48:58 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
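Two things worth noting at this boundary. First, the accel_decomp_mcore summary above reports user 0m4.675s against real 0m1.511s, roughly four times the wall clock, as expected with four polling reactors. Second, the accel_decomp_full_mcore run starting here adds "-o 0" to the same decompress workload, and the traced input size changes from '4096 bytes' to '111250 bytes', which suggests the whole test file is processed per operation rather than 4 KiB chunks. The two command lines, copied from this log, differ only in those flags:

  # Chunked, single-core run (from the accel_decomp section):
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
  # Full-buffer, four-core run (this section):
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf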
00:22:55.759 [2024-05-15 00:48:58.761120] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:22:55.759 [2024-05-15 00:48:58.761241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76832 ] 00:22:55.759 [2024-05-15 00:48:58.902252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.759 [2024-05-15 00:48:58.999138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.759 [2024-05-15 00:48:58.999291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.759 [2024-05-15 00:48:58.999417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.759 [2024-05-15 00:48:58.999662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.018 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.019 00:48:59 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.019 00:48:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:22:56.955 00:49:00 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:56.955 ************************************ 00:22:56.955 END TEST accel_decomp_full_mcore 00:22:56.955 ************************************ 00:22:56.955 00:22:56.955 real 0m1.495s 00:22:56.955 user 0m4.690s 00:22:56.955 sys 0m0.133s 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:56.955 00:49:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:22:57.212 00:49:00 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:22:57.212 00:49:00 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:22:57.212 00:49:00 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:57.212 00:49:00 accel -- common/autotest_common.sh@10 -- # set +x 00:22:57.212 ************************************ 00:22:57.212 START TEST accel_decomp_mthread 00:22:57.212 ************************************ 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:22:57.212 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:22:57.212 [2024-05-15 00:49:00.299727] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
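For reference, the accel_decomp_mthread case launched above boils down to a single accel_perf invocation against the pre-compressed bib fixture. A minimal standalone sketch, assuming the same SPDK build tree as this run (the -c /dev/fd/62 JSON config in the trace is produced by build_accel_config and matters only when accel modules need configuring; the flag descriptions are paraphrased, not quoted from accel_perf's usage text):

  # 1-second software decompress of the bib fixture, with output verification (-y)
  # and two worker threads per reactor core (-T 2); the transfer size is left at its
  # 4096-byte default, matching the '4096 bytes' val read in the trace below
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2

The accel_decomp_full_mcore case that just ended ran the same decompress workload across four reactors, which is why its EAL line shows -c 0xf and four 'Reactor started' notices.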
00:22:57.212 [2024-05-15 00:49:00.299806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76870 ] 00:22:57.212 [2024-05-15 00:49:00.438089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.470 [2024-05-15 00:49:00.541548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.470 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:57.471 00:49:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:58.861 00:22:58.861 real 0m1.491s 00:22:58.861 user 0m1.280s 00:22:58.861 sys 0m0.116s 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:58.861 00:49:01 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:22:58.861 ************************************ 00:22:58.861 END TEST accel_decomp_mthread 00:22:58.861 ************************************ 00:22:58.861 00:49:01 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:22:58.861 00:49:01 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:22:58.862 00:49:01 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:58.862 00:49:01 accel -- common/autotest_common.sh@10 -- # set +x 00:22:58.862 ************************************ 00:22:58.862 START TEST accel_decomp_full_mthread 00:22:58.862 ************************************ 00:22:58.862 00:49:01 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:22:58.862 00:49:01 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:22:58.862 [2024-05-15 00:49:01.841429] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
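The accel_decomp_full_mthread case starting here differs from accel_decomp_mthread only in the added -o 0, which lets the whole 111250-byte bib file be used as the transfer size instead of the default 4096-byte chunks, as the '111250 bytes' vs '4096 bytes' vals in the two traces suggest. A sketch of the equivalent standalone command, under the same assumptions as the earlier one:

  # same 1-second, 2-thread decompress, but -o 0 sizes each operation to the
  # full 111250-byte input rather than 4096-byte chunks
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2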
00:22:58.862 [2024-05-15 00:49:01.841557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76903 ] 00:22:58.862 [2024-05-15 00:49:01.981805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.862 [2024-05-15 00:49:02.081266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:22:59.122 00:49:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:23:00.059 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:23:00.059 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:00.060 00:23:00.060 real 0m1.520s 00:23:00.060 user 0m1.291s 00:23:00.060 sys 0m0.130s 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:00.060 00:49:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:23:00.060 ************************************ 00:23:00.060 END TEST accel_decomp_full_mthread 00:23:00.060 ************************************ 00:23:00.319 00:49:03 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:23:00.319 00:49:03 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:23:00.319 00:49:03 accel -- accel/accel.sh@137 -- # build_accel_config 00:23:00.319 00:49:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:23:00.319 00:49:03 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:23:00.319 00:49:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:23:00.319 00:49:03 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:00.319 00:49:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:23:00.319 00:49:03 accel -- common/autotest_common.sh@10 -- # set +x 00:23:00.319 00:49:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:23:00.319 00:49:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:23:00.319 00:49:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:23:00.319 00:49:03 accel -- accel/accel.sh@41 -- # jq -r . 00:23:00.319 ************************************ 00:23:00.319 START TEST accel_dif_functional_tests 00:23:00.319 ************************************ 00:23:00.319 00:49:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:23:00.319 [2024-05-15 00:49:03.438387] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:00.319 [2024-05-15 00:49:03.438463] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76940 ] 00:23:00.319 [2024-05-15 00:49:03.573196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:00.578 [2024-05-15 00:49:03.660002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.578 [2024-05-15 00:49:03.660147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.578 [2024-05-15 00:49:03.660150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.578 00:23:00.578 00:23:00.578 CUnit - A unit testing framework for C - Version 2.1-3 00:23:00.578 http://cunit.sourceforge.net/ 00:23:00.578 00:23:00.578 00:23:00.578 Suite: accel_dif 00:23:00.578 Test: verify: DIF generated, GUARD check ...passed 00:23:00.578 Test: verify: DIF generated, APPTAG check ...passed 00:23:00.578 Test: verify: DIF generated, REFTAG check ...passed 00:23:00.578 Test: verify: DIF not generated, GUARD check ...passed 00:23:00.578 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 00:49:03.765935] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:23:00.578 [2024-05-15 00:49:03.766056] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:23:00.578 [2024-05-15 00:49:03.766106] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:23:00.578 passed 00:23:00.578 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 00:49:03.766247] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:23:00.578 [2024-05-15 00:49:03.766295] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:23:00.578 passed 00:23:00.578 Test: verify: APPTAG correct, APPTAG check ...[2024-05-15 00:49:03.766322] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:23:00.578 passed 00:23:00.578 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:23:00.578 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:23:00.578 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-05-15 00:49:03.766525] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:23:00.578 passed 00:23:00.578 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:23:00.578 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 00:49:03.766982] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:23:00.579 passed 00:23:00.579 Test: generate copy: DIF generated, GUARD check ...passed 00:23:00.579 Test: generate copy: DIF generated, APTTAG check ...passed 00:23:00.579 Test: generate copy: DIF generated, REFTAG check ...passed 00:23:00.579 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:23:00.579 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:23:00.579 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:23:00.579 Test: generate copy: iovecs-len validate ...[2024-05-15 00:49:03.767873] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:23:00.579 passed 00:23:00.579 Test: generate copy: buffer alignment validate ...passed 00:23:00.579 00:23:00.579 Run Summary: Type Total Ran Passed Failed Inactive 00:23:00.579 suites 1 1 n/a 0 0 00:23:00.579 tests 20 20 20 0 0 00:23:00.579 asserts 204 204 204 0 n/a 00:23:00.579 00:23:00.579 Elapsed time = 0.007 seconds 00:23:00.837 00:23:00.837 real 0m0.575s 00:23:00.837 user 0m0.800s 00:23:00.837 sys 0m0.156s 00:23:00.837 ************************************ 00:23:00.837 END TEST accel_dif_functional_tests 00:23:00.837 ************************************ 00:23:00.837 00:49:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:00.837 00:49:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:23:00.837 ************************************ 00:23:00.837 END TEST accel 00:23:00.837 ************************************ 00:23:00.837 00:23:00.837 real 0m34.379s 00:23:00.837 user 0m35.931s 00:23:00.837 sys 0m4.149s 00:23:00.837 00:49:04 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:00.837 00:49:04 accel -- common/autotest_common.sh@10 -- # set +x 00:23:00.837 00:49:04 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:23:00.837 00:49:04 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:23:00.837 00:49:04 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:00.837 00:49:04 -- common/autotest_common.sh@10 -- # set +x 00:23:00.837 ************************************ 00:23:00.837 START TEST accel_rpc 00:23:00.837 ************************************ 00:23:00.837 00:49:04 accel_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:23:01.097 * Looking for test storage... 00:23:01.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:23:01.097 00:49:04 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:23:01.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
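The accel_rpc run that begins here exercises the target purely over JSON-RPC: spdk_tgt is started with --wait-for-rpc so the copy opcode can be reassigned before module initialization, as the accel_assign_opc, framework_start_init and accel_get_opc_assignments calls in the trace below show. A sketch of the same sequence issued with rpc.py directly (rpc_cmd in the trace is a wrapper around this script; the explicit wait for the RPC socket is simplified here to a comment):

  # start the target paused, before accel modules are initialized
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  # (the test waits for /var/tmp/spdk.sock to appear before issuing RPCs)
  # assign the copy opcode to the software module, then finish initialization
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  # confirm the assignment took effect
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments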
00:23:01.097 00:49:04 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77004 00:23:01.097 00:49:04 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:01.097 00:49:04 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 77004 00:23:01.097 00:49:04 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 77004 ']' 00:23:01.097 00:49:04 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.097 00:49:04 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:01.097 00:49:04 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.097 00:49:04 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:01.097 00:49:04 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:01.097 [2024-05-15 00:49:04.208639] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:01.097 [2024-05-15 00:49:04.208944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77004 ] 00:23:01.097 [2024-05-15 00:49:04.341761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.356 [2024-05-15 00:49:04.432547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.295 00:49:05 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:02.295 00:49:05 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:23:02.295 00:49:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:23:02.295 00:49:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:23:02.295 00:49:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:23:02.295 00:49:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:23:02.295 00:49:05 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:23:02.295 00:49:05 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:23:02.295 00:49:05 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:02.295 00:49:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:02.295 ************************************ 00:23:02.296 START TEST accel_assign_opcode 00:23:02.296 ************************************ 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:23:02.296 [2024-05-15 00:49:05.237494] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:23:02.296 [2024-05-15 
00:49:05.245466] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.296 software 00:23:02.296 ************************************ 00:23:02.296 END TEST accel_assign_opcode 00:23:02.296 ************************************ 00:23:02.296 00:23:02.296 real 0m0.298s 00:23:02.296 user 0m0.054s 00:23:02.296 sys 0m0.009s 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:02.296 00:49:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:23:02.296 00:49:05 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 77004 00:23:02.296 00:49:05 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 77004 ']' 00:23:02.296 00:49:05 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 77004 00:23:02.296 00:49:05 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:23:02.296 00:49:05 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:02.296 00:49:05 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 77004 00:23:02.577 killing process with pid 77004 00:23:02.577 00:49:05 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:02.577 00:49:05 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:02.577 00:49:05 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 77004' 00:23:02.577 00:49:05 accel_rpc -- common/autotest_common.sh@966 -- # kill 77004 00:23:02.577 00:49:05 accel_rpc -- common/autotest_common.sh@971 -- # wait 77004 00:23:02.850 ************************************ 00:23:02.850 END TEST accel_rpc 00:23:02.850 ************************************ 00:23:02.850 00:23:02.850 real 0m1.919s 00:23:02.850 user 0m2.030s 00:23:02.850 sys 0m0.470s 00:23:02.850 00:49:05 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:02.850 00:49:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:02.850 00:49:06 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:23:02.850 00:49:06 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:23:02.850 00:49:06 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:02.850 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:23:02.850 ************************************ 00:23:02.850 START TEST app_cmdline 00:23:02.850 
************************************ 00:23:02.850 00:49:06 app_cmdline -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:23:02.850 * Looking for test storage... 00:23:02.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:23:02.850 00:49:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:23:02.850 00:49:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77115 00:23:02.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.850 00:49:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77115 00:23:02.850 00:49:06 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:23:02.850 00:49:06 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 77115 ']' 00:23:02.850 00:49:06 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.850 00:49:06 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:02.850 00:49:06 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.850 00:49:06 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:02.850 00:49:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:23:03.109 [2024-05-15 00:49:06.196142] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:03.110 [2024-05-15 00:49:06.196486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77115 ] 00:23:03.110 [2024-05-15 00:49:06.342414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.369 [2024-05-15 00:49:06.444723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.938 00:49:07 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:03.938 00:49:07 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:23:03.938 00:49:07 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:23:04.197 { 00:23:04.197 "fields": { 00:23:04.197 "commit": "4506c0c36", 00:23:04.197 "major": 24, 00:23:04.197 "minor": 5, 00:23:04.197 "patch": 0, 00:23:04.197 "suffix": "-pre" 00:23:04.197 }, 00:23:04.197 "version": "SPDK v24.05-pre git sha1 4506c0c36" 00:23:04.197 } 00:23:04.197 00:49:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:23:04.197 00:49:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:23:04.197 00:49:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:23:04.197 00:49:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:23:04.197 00:49:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:23:04.197 00:49:07 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.197 00:49:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:23:04.197 00:49:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:23:04.197 00:49:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:23:04.197 00:49:07 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.456 00:49:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:23:04.456 00:49:07 
app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:23:04.457 00:49:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:04.457 00:49:07 app_cmdline -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:23:04.457 2024/05/15 00:49:07 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:23:04.457 request: 00:23:04.457 { 00:23:04.457 "method": "env_dpdk_get_mem_stats", 00:23:04.457 "params": {} 00:23:04.457 } 00:23:04.457 Got JSON-RPC error response 00:23:04.457 GoRPCClient: error on JSON-RPC call 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:04.716 00:49:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77115 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 77115 ']' 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 77115 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 77115 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:04.716 killing process with pid 77115 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 77115' 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@966 -- # kill 77115 00:23:04.716 00:49:07 app_cmdline -- common/autotest_common.sh@971 -- # wait 77115 00:23:04.975 00:23:04.975 real 0m2.123s 00:23:04.975 user 0m2.619s 00:23:04.975 sys 0m0.526s 00:23:04.975 00:49:08 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:04.975 
00:49:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:23:04.975 ************************************ 00:23:04.975 END TEST app_cmdline 00:23:04.975 ************************************ 00:23:04.975 00:49:08 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:23:04.975 00:49:08 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:23:04.975 00:49:08 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:04.975 00:49:08 -- common/autotest_common.sh@10 -- # set +x 00:23:04.975 ************************************ 00:23:04.975 START TEST version 00:23:04.975 ************************************ 00:23:04.975 00:49:08 version -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:23:05.235 * Looking for test storage... 00:23:05.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:23:05.236 00:49:08 version -- app/version.sh@17 -- # get_header_version major 00:23:05.236 00:49:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:05.236 00:49:08 version -- app/version.sh@14 -- # cut -f2 00:23:05.236 00:49:08 version -- app/version.sh@14 -- # tr -d '"' 00:23:05.236 00:49:08 version -- app/version.sh@17 -- # major=24 00:23:05.236 00:49:08 version -- app/version.sh@18 -- # get_header_version minor 00:23:05.236 00:49:08 version -- app/version.sh@14 -- # cut -f2 00:23:05.236 00:49:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:05.236 00:49:08 version -- app/version.sh@14 -- # tr -d '"' 00:23:05.236 00:49:08 version -- app/version.sh@18 -- # minor=5 00:23:05.236 00:49:08 version -- app/version.sh@19 -- # get_header_version patch 00:23:05.236 00:49:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:05.236 00:49:08 version -- app/version.sh@14 -- # cut -f2 00:23:05.236 00:49:08 version -- app/version.sh@14 -- # tr -d '"' 00:23:05.236 00:49:08 version -- app/version.sh@19 -- # patch=0 00:23:05.236 00:49:08 version -- app/version.sh@20 -- # get_header_version suffix 00:23:05.236 00:49:08 version -- app/version.sh@14 -- # cut -f2 00:23:05.236 00:49:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:05.236 00:49:08 version -- app/version.sh@14 -- # tr -d '"' 00:23:05.236 00:49:08 version -- app/version.sh@20 -- # suffix=-pre 00:23:05.236 00:49:08 version -- app/version.sh@22 -- # version=24.5 00:23:05.236 00:49:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:23:05.236 00:49:08 version -- app/version.sh@28 -- # version=24.5rc0 00:23:05.236 00:49:08 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:05.236 00:49:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:23:05.236 00:49:08 version -- app/version.sh@30 -- # py_version=24.5rc0 00:23:05.236 00:49:08 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:23:05.236 00:23:05.236 real 0m0.169s 00:23:05.236 user 0m0.101s 00:23:05.236 sys 0m0.099s 00:23:05.236 00:49:08 version -- common/autotest_common.sh@1123 -- # 
xtrace_disable 00:23:05.236 ************************************ 00:23:05.236 00:49:08 version -- common/autotest_common.sh@10 -- # set +x 00:23:05.236 END TEST version 00:23:05.236 ************************************ 00:23:05.236 00:49:08 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:23:05.236 00:49:08 -- spdk/autotest.sh@194 -- # uname -s 00:23:05.236 00:49:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:23:05.236 00:49:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:05.236 00:49:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:05.236 00:49:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:23:05.236 00:49:08 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:23:05.236 00:49:08 -- spdk/autotest.sh@256 -- # timing_exit lib 00:23:05.236 00:49:08 -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:05.236 00:49:08 -- common/autotest_common.sh@10 -- # set +x 00:23:05.236 00:49:08 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:23:05.236 00:49:08 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:23:05.236 00:49:08 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:23:05.236 00:49:08 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:23:05.236 00:49:08 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:23:05.236 00:49:08 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:23:05.236 00:49:08 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:23:05.236 00:49:08 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:05.236 00:49:08 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:05.236 00:49:08 -- common/autotest_common.sh@10 -- # set +x 00:23:05.236 ************************************ 00:23:05.236 START TEST nvmf_tcp 00:23:05.236 ************************************ 00:23:05.236 00:49:08 nvmf_tcp -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:23:05.496 * Looking for test storage... 00:23:05.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.496 00:49:08 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.497 00:49:08 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.497 00:49:08 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.497 00:49:08 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.497 00:49:08 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.497 00:49:08 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.497 00:49:08 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.497 00:49:08 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:23:05.497 00:49:08 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:23:05.497 00:49:08 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:05.497 00:49:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:23:05.497 00:49:08 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:23:05.497 00:49:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:05.497 00:49:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:05.497 00:49:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.497 ************************************ 00:23:05.497 START TEST nvmf_example 00:23:05.497 ************************************ 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:23:05.497 * Looking for test storage... 
00:23:05.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.497 00:49:08 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.497 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:05.498 Cannot find device "nvmf_init_br" 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:05.498 Cannot find device "nvmf_tgt_br" 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:05.498 Cannot find device "nvmf_tgt_br2" 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:05.498 Cannot find device "nvmf_init_br" 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:05.498 Cannot find device "nvmf_tgt_br" 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:05.498 Cannot find device 
"nvmf_tgt_br2" 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:05.498 Cannot find device "nvmf_br" 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:23:05.498 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:05.757 Cannot find device "nvmf_init_if" 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:05.757 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:05.758 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:05.758 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:05.758 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:05.758 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:05.758 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:05.758 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:05.758 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:05.758 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:05.758 00:49:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:05.758 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:23:05.758 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:05.758 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:05.758 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:05.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:23:05.758 00:23:05.758 --- 10.0.0.2 ping statistics --- 00:23:05.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.758 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:23:05.758 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:05.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:05.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:23:05.758 00:23:05.758 --- 10.0.0.3 ping statistics --- 00:23:05.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.758 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:23:05.758 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:06.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:23:06.017 00:23:06.017 --- 10.0.0.1 ping statistics --- 00:23:06.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.017 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=77469 00:23:06.017 00:49:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:23:06.018 00:49:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.018 00:49:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
77469 00:23:06.018 00:49:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 77469 ']' 00:23:06.018 00:49:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.018 00:49:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:06.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.018 00:49:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.018 00:49:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:06.018 00:49:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.982 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:23:07.241 00:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:19.447 Initializing NVMe Controllers 00:23:19.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:19.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:19.447 Initialization complete. Launching workers. 00:23:19.447 ======================================================== 00:23:19.447 Latency(us) 00:23:19.447 Device Information : IOPS MiB/s Average min max 00:23:19.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15038.90 58.75 4257.22 758.38 20289.85 00:23:19.447 ======================================================== 00:23:19.447 Total : 15038.90 58.75 4257.22 758.38 20289.85 00:23:19.447 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.447 rmmod nvme_tcp 00:23:19.447 rmmod nvme_fabrics 00:23:19.447 rmmod nvme_keyring 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 77469 ']' 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 77469 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 77469 ']' 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 77469 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 77469 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:23:19.447 killing process with pid 77469 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 77469' 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # kill 77469 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@971 -- # wait 77469 00:23:19.447 nvmf threads initialize successfully 00:23:19.447 bdev subsystem init successfully 
00:23:19.447 created a nvmf target service 00:23:19.447 create targets's poll groups done 00:23:19.447 all subsystems of target started 00:23:19.447 nvmf target is running 00:23:19.447 all subsystems of target stopped 00:23:19.447 destroy targets's poll groups done 00:23:19.447 destroyed the nvmf target service 00:23:19.447 bdev subsystem finish successfully 00:23:19.447 nvmf threads destroy successfully 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:19.447 00:23:19.447 real 0m12.374s 00:23:19.447 user 0m44.621s 00:23:19.447 sys 0m1.919s 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:19.447 00:49:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:23:19.447 ************************************ 00:23:19.447 END TEST nvmf_example 00:23:19.447 ************************************ 00:23:19.447 00:49:20 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:23:19.447 00:49:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:19.447 00:49:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:19.447 00:49:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.447 ************************************ 00:23:19.447 START TEST nvmf_filesystem 00:23:19.447 ************************************ 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:23:19.447 * Looking for test storage... 
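For reference, the flow the nvmf_example test above drives through rpc_cmd can be reproduced by hand against the same namespace. A rough standalone sketch, assuming rpc_cmd simply forwards these calls to scripts/rpc.py over the default /var/tmp/spdk.sock socket (which is how such RPCs are normally issued outside the harness), and reusing the exact arguments from the trace:

    # Sketch only: the nvmf_example target setup and perf run from the trace, done manually.
    SPDK=/home/vagrant/spdk_repo/spdk
    NS="ip netns exec nvmf_tgt_ns_spdk"

    # 1. Start the example nvmf target inside the test namespace (core mask 0xF; other flags as traced).
    $NS $SPDK/build/examples/nvmf -i 0 -g 10000 -m 0xF &

    # 2. Configure it over JSON-RPC; method names and arguments are taken verbatim from the trace.
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512        # 64 MB malloc bdev, 512 B blocks -> Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 3. Exercise the listener with the same workload as the trace: QD 64, 4 KiB random mixed I/O, 10 s.
    $SPDK/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

In the run logged above this reported roughly 15 k IOPS (about 59 MiB/s) at an average latency near 4.3 ms over the veth link. A kernel-initiator check of the same listener would instead use nvme-cli, for example nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 together with the --hostnqn/--hostid values that nvmf/common.sh generated earlier; the NVME_CONNECT and NVME_HOST variables set at the top of this suite exist for exactly that kind of connect.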
00:23:19.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:23:19.447 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:23:19.448 00:49:21 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # 
_examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:23:19.448 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:19.448 #define SPDK_CONFIG_H 00:23:19.448 #define SPDK_CONFIG_APPS 1 00:23:19.448 #define SPDK_CONFIG_ARCH native 00:23:19.448 #undef SPDK_CONFIG_ASAN 00:23:19.448 #define SPDK_CONFIG_AVAHI 1 00:23:19.448 #undef SPDK_CONFIG_CET 00:23:19.448 #define SPDK_CONFIG_COVERAGE 1 00:23:19.448 #define SPDK_CONFIG_CROSS_PREFIX 00:23:19.448 #undef SPDK_CONFIG_CRYPTO 00:23:19.448 #undef SPDK_CONFIG_CRYPTO_MLX5 00:23:19.448 #undef SPDK_CONFIG_CUSTOMOCF 00:23:19.448 #undef SPDK_CONFIG_DAOS 00:23:19.448 #define SPDK_CONFIG_DAOS_DIR 00:23:19.448 #define SPDK_CONFIG_DEBUG 1 00:23:19.448 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:23:19.448 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:23:19.448 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:23:19.448 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:23:19.448 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:19.448 #undef SPDK_CONFIG_DPDK_UADK 00:23:19.448 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:19.448 #define SPDK_CONFIG_EXAMPLES 1 00:23:19.448 #undef SPDK_CONFIG_FC 00:23:19.448 #define SPDK_CONFIG_FC_PATH 00:23:19.448 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:19.448 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:19.448 #undef SPDK_CONFIG_FUSE 00:23:19.448 #undef SPDK_CONFIG_FUZZER 00:23:19.448 #define SPDK_CONFIG_FUZZER_LIB 00:23:19.448 #define SPDK_CONFIG_GOLANG 1 00:23:19.448 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:23:19.448 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:23:19.448 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:19.448 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:23:19.448 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:19.448 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:19.448 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:19.448 #define SPDK_CONFIG_IDXD 1 00:23:19.448 #undef SPDK_CONFIG_IDXD_KERNEL 00:23:19.448 #undef SPDK_CONFIG_IPSEC_MB 00:23:19.448 #define SPDK_CONFIG_IPSEC_MB_DIR 00:23:19.448 #define SPDK_CONFIG_ISAL 1 00:23:19.448 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:19.448 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:19.448 #define SPDK_CONFIG_LIBDIR 00:23:19.448 #undef SPDK_CONFIG_LTO 00:23:19.448 #define SPDK_CONFIG_MAX_LCORES 00:23:19.448 #define SPDK_CONFIG_NVME_CUSE 1 00:23:19.448 #undef SPDK_CONFIG_OCF 00:23:19.448 #define SPDK_CONFIG_OCF_PATH 00:23:19.448 #define SPDK_CONFIG_OPENSSL_PATH 00:23:19.448 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:19.448 #define SPDK_CONFIG_PGO_DIR 00:23:19.448 #undef SPDK_CONFIG_PGO_USE 00:23:19.449 #define SPDK_CONFIG_PREFIX 
/usr/local 00:23:19.449 #undef SPDK_CONFIG_RAID5F 00:23:19.449 #undef SPDK_CONFIG_RBD 00:23:19.449 #define SPDK_CONFIG_RDMA 1 00:23:19.449 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:19.449 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:19.449 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:19.449 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:19.449 #define SPDK_CONFIG_SHARED 1 00:23:19.449 #undef SPDK_CONFIG_SMA 00:23:19.449 #define SPDK_CONFIG_TESTS 1 00:23:19.449 #undef SPDK_CONFIG_TSAN 00:23:19.449 #define SPDK_CONFIG_UBLK 1 00:23:19.449 #define SPDK_CONFIG_UBSAN 1 00:23:19.449 #undef SPDK_CONFIG_UNIT_TESTS 00:23:19.449 #undef SPDK_CONFIG_URING 00:23:19.449 #define SPDK_CONFIG_URING_PATH 00:23:19.449 #undef SPDK_CONFIG_URING_ZNS 00:23:19.449 #define SPDK_CONFIG_USDT 1 00:23:19.449 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:23:19.449 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:23:19.449 #undef SPDK_CONFIG_VFIO_USER 00:23:19.449 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:19.449 #define SPDK_CONFIG_VHOST 1 00:23:19.449 #define SPDK_CONFIG_VIRTIO 1 00:23:19.449 #undef SPDK_CONFIG_VTUNE 00:23:19.449 #define SPDK_CONFIG_VTUNE_DIR 00:23:19.449 #define SPDK_CONFIG_WERROR 1 00:23:19.449 #define SPDK_CONFIG_WPDK_DIR 00:23:19.449 #undef SPDK_CONFIG_XNVME 00:23:19.449 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:23:19.449 00:49:21 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:23:19.449 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:23:19.450 00:49:21 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 
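The long run of paired ": <value>" and "export <NAME>" entries above is the harness stamping defaults onto its SPDK_TEST_* and SPDK_RUN_* switches before exporting them, so every child script and test binary sees the same configuration (in this run, for instance, SPDK_TEST_NVMF=1 and SPDK_TEST_NVMF_TRANSPORT=tcp). The trace is consistent with the usual bash default-then-export idiom; a minimal sketch with a few of the flags from this run follows (the exact lines in autotest_common.sh may be written differently):

    # Give each test switch a default only if the caller has not set it already,
    # then export it; the ":" builtin is what produces the bare "# : 1" trace entries.
    : "${SPDK_TEST_NVMF:=1}"
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_RUN_FUNCTIONAL_TEST:=1}"
    export SPDK_RUN_FUNCTIONAL_TEST
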
00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:23:19.450 
00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:19.450 00:49:21 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:23:19.450 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 77716 ]] 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 77716 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.pAkgID 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.pAkgID/tests/target /tmp/spdk.pAkgID 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12014575616 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5953888256 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12014575616 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5953888256 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267752448 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=139264 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use 
avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92506906624 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7195873280 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:19.451 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:23:19.451 * Looking for test storage... 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=12014575616 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:19.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 
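The set_test_storage portion of the trace above sizes up every mount from df -T, keeping filesystem type, size, available space and usage per mount point in associative arrays, then walks the candidate directories (the test dir, then the /tmp/spdk.XXXXXX fallback) until one sits on a mount with at least the requested 2 GiB free; in this run that is the btrfs volume mounted at /home. A rough standalone sketch of that selection logic, reusing the array names from the trace (the byte-unit handling via --block-size=1 is an assumption of the sketch, not necessarily how the harness does it):

    #!/usr/bin/env bash
    # Record free space per mount point, then pick the first candidate directory
    # whose backing mount can hold the requested test storage.
    requested_size=$((2048 * 1024 * 1024))          # 2 GiB, as requested in the trace
    storage_candidates=("$PWD" "/tmp")              # placeholders for testdir and the mktemp fallback
    declare -A fss avails

    while read -r _source fs _size _used avail _usep mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$avail
    done < <(df -T --block-size=1 | grep -v Filesystem)   # --block-size=1 => bytes (sketch assumption)

    for target_dir in "${storage_candidates[@]}"; do
        mount_point=$(df --block-size=1 "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        if (( ${avails[$mount_point]:-0} >= requested_size )); then
            printf '* Found test storage at %s (%s, %s bytes free)\n' \
                   "$target_dir" "${fss[$mount_point]}" "${avails[$mount_point]}"
            break
        fi
    done
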
00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.452 00:49:21 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:19.452 Cannot find device 
"nvmf_tgt_br" 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:19.452 Cannot find device "nvmf_tgt_br2" 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:19.452 Cannot find device "nvmf_tgt_br" 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:19.452 Cannot find device "nvmf_tgt_br2" 00:23:19.452 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:19.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:19.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:19.453 00:49:21 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:19.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:23:19.453 00:23:19.453 --- 10.0.0.2 ping statistics --- 00:23:19.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.453 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:19.453 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:19.453 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:23:19.453 00:23:19.453 --- 10.0.0.3 ping statistics --- 00:23:19.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.453 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:19.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:19.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:23:19.453 00:23:19.453 --- 10.0.0.1 ping statistics --- 00:23:19.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.453 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:23:19.453 ************************************ 00:23:19.453 START TEST nvmf_filesystem_no_in_capsule 00:23:19.453 ************************************ 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=77875 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 77875 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 77875 ']' 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:19.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
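The nvmf_veth_init portion of the trace builds the virtual test network everything else depends on: a network namespace for the target, veth pairs whose host-side peers are enslaved to a bridge, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, an iptables ACCEPT rule for TCP port 4420, and ping checks in both directions. Condensed into a standalone sketch that mirrors the commands visible in the trace (run as root; the second target interface nvmf_tgt_if2/10.0.0.3 and the idempotent cleanup of stale interfaces are omitted for brevity):

    #!/usr/bin/env bash
    set -e
    # Namespace for the NVMe-oF target and veth pairs bridging it to the host.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 stays on the host (initiator), 10.0.0.2 lives in the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers together and allow NVMe/TCP traffic on port 4420.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks, as in the trace: host -> target namespace and back.
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
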
00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:19.453 00:49:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:19.453 [2024-05-15 00:49:21.693256] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:19.453 [2024-05-15 00:49:21.693390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.453 [2024-05-15 00:49:21.835144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.453 [2024-05-15 00:49:21.929519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.453 [2024-05-15 00:49:21.929572] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.453 [2024-05-15 00:49:21.929600] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.453 [2024-05-15 00:49:21.929608] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.453 [2024-05-15 00:49:21.929641] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.453 [2024-05-15 00:49:21.929930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.453 [2024-05-15 00:49:21.930080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.453 [2024-05-15 00:49:21.930314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.453 [2024-05-15 00:49:21.930319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:19.453 [2024-05-15 00:49:22.691138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.453 00:49:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.453 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:19.713 Malloc1 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:19.713 [2024-05-15 00:49:22.879780] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:19.713 [2024-05-15 00:49:22.880112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:23:19.713 00:49:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:23:19.713 { 00:23:19.713 "aliases": [ 00:23:19.713 "cff9d295-257c-4170-a01b-70b1e037c9f1" 00:23:19.713 ], 00:23:19.713 "assigned_rate_limits": { 00:23:19.713 "r_mbytes_per_sec": 0, 00:23:19.713 "rw_ios_per_sec": 0, 00:23:19.713 "rw_mbytes_per_sec": 0, 00:23:19.713 "w_mbytes_per_sec": 0 00:23:19.713 }, 00:23:19.713 "block_size": 512, 00:23:19.713 "claim_type": "exclusive_write", 00:23:19.713 "claimed": true, 00:23:19.713 "driver_specific": {}, 00:23:19.713 "memory_domains": [ 00:23:19.713 { 00:23:19.713 "dma_device_id": "system", 00:23:19.713 "dma_device_type": 1 00:23:19.713 }, 00:23:19.713 { 00:23:19.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.713 "dma_device_type": 2 00:23:19.713 } 00:23:19.713 ], 00:23:19.713 "name": "Malloc1", 00:23:19.713 "num_blocks": 1048576, 00:23:19.713 "product_name": "Malloc disk", 00:23:19.713 "supported_io_types": { 00:23:19.713 "abort": true, 00:23:19.713 "compare": false, 00:23:19.713 "compare_and_write": false, 00:23:19.713 "flush": true, 00:23:19.713 "nvme_admin": false, 00:23:19.713 "nvme_io": false, 00:23:19.713 "read": true, 00:23:19.713 "reset": true, 00:23:19.713 "unmap": true, 00:23:19.713 "write": true, 00:23:19.713 "write_zeroes": true 00:23:19.713 }, 00:23:19.713 "uuid": "cff9d295-257c-4170-a01b-70b1e037c9f1", 00:23:19.713 "zoned": false 00:23:19.713 } 00:23:19.713 ]' 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:23:19.713 00:49:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:23:19.972 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:23:19.972 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:23:19.972 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:23:19.972 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:23:19.972 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:19.972 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:23:19.972 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:23:19.972 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:23:19.972 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:23:19.972 00:49:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:23:22.508 00:49:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:23.444 ************************************ 00:23:23.444 START TEST filesystem_ext4 00:23:23.444 ************************************ 
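With the target app running inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), the trace configures it over JSON-RPC and attaches the kernel initiator: create the TCP transport, back a namespace with a 512 MiB malloc bdev, expose it through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, nvme connect from the host, wait for the block device to show up by serial, and lay down a GPT partition. A condensed sketch of that sequence with the same arguments seen in the trace (rpc_cmd in the trace is a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the generated hostnqn/hostid are environment specific):

    #!/usr/bin/env bash
    set -e
    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"    # stand-in for rpc_cmd; uses the default /var/tmp/spdk.sock

    # Target configuration (arguments exactly as traced).
    "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0
    "$RPC" bdev_malloc_create 512 512 -b Malloc1        # 512 MiB bdev with 512-byte blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect, wait for the namespace to appear by serial, then partition it.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$(nvme gen-hostnqn)"                 # the run also passes --hostid from the same UUID
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
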
00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:23:23.444 mke2fs 1.46.5 (30-Dec-2021) 00:23:23.444 Discarding device blocks: 0/522240 done 00:23:23.444 Creating filesystem with 522240 1k blocks and 130560 inodes 00:23:23.444 Filesystem UUID: b8c3005b-e1dc-45fa-b5cc-04411a83591b 00:23:23.444 Superblock backups stored on blocks: 00:23:23.444 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:23:23.444 00:23:23.444 Allocating group tables: 0/64 done 00:23:23.444 Writing inode tables: 0/64 done 00:23:23.444 Creating journal (8192 blocks): done 00:23:23.444 Writing superblocks and filesystem accounting information: 0/64 done 00:23:23.444 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:23:23.444 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 77875 00:23:23.702 00:49:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:23:23.702 ************************************ 00:23:23.702 END TEST filesystem_ext4 00:23:23.702 ************************************ 00:23:23.702 00:23:23.702 real 0m0.366s 00:23:23.702 user 0m0.015s 00:23:23.702 sys 0m0.063s 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 ************************************ 00:23:23.702 START TEST filesystem_btrfs 00:23:23.702 ************************************ 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:23:23.702 btrfs-progs v6.6.2 00:23:23.702 See https://btrfs.readthedocs.io for more information. 
00:23:23.702 00:23:23.702 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:23:23.702 NOTE: several default settings have changed in version 5.15, please make sure 00:23:23.702 this does not affect your deployments: 00:23:23.702 - DUP for metadata (-m dup) 00:23:23.702 - enabled no-holes (-O no-holes) 00:23:23.702 - enabled free-space-tree (-R free-space-tree) 00:23:23.702 00:23:23.702 Label: (null) 00:23:23.702 UUID: 57b36d5c-2d9f-4c79-bcf4-6d3e33328ca5 00:23:23.702 Node size: 16384 00:23:23.702 Sector size: 4096 00:23:23.702 Filesystem size: 510.00MiB 00:23:23.702 Block group profiles: 00:23:23.702 Data: single 8.00MiB 00:23:23.702 Metadata: DUP 32.00MiB 00:23:23.702 System: DUP 8.00MiB 00:23:23.702 SSD detected: yes 00:23:23.702 Zoned device: no 00:23:23.702 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:23:23.702 Runtime features: free-space-tree 00:23:23.702 Checksum: crc32c 00:23:23.702 Number of devices: 1 00:23:23.702 Devices: 00:23:23.702 ID SIZE PATH 00:23:23.702 1 510.00MiB /dev/nvme0n1p1 00:23:23.702 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:23:23.702 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:23:23.960 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:23:23.960 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:23:23.960 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:23:23.960 00:49:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 77875 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:23:23.960 00:23:23.960 real 0m0.230s 00:23:23.960 user 0m0.023s 00:23:23.960 sys 0m0.059s 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:23.960 ************************************ 00:23:23.960 END TEST filesystem_btrfs 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:23:23.960 ************************************ 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test 
filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:23.960 ************************************ 00:23:23.960 START TEST filesystem_xfs 00:23:23.960 ************************************ 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:23:23.960 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:23:23.960 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:23:23.960 = sectsz=512 attr=2, projid32bit=1 00:23:23.960 = crc=1 finobt=1, sparse=1, rmapbt=0 00:23:23.960 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:23:23.960 data = bsize=4096 blocks=130560, imaxpct=25 00:23:23.960 = sunit=0 swidth=0 blks 00:23:23.960 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:23:23.960 log =internal log bsize=4096 blocks=16384, version=2 00:23:23.960 = sectsz=512 sunit=0 blks, lazy-count=1 00:23:23.960 realtime =none extsz=4096 blocks=0, rtextents=0 00:23:24.892 Discarding blocks...Done. 
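The make_filesystem helper traced in the three sub-tests above (ext4, btrfs, xfs) follows the same pattern each time: pick the force flag for the filesystem type, run mkfs on the partition, and return 0 on success. The following is a minimal sketch reconstructed only from the xtrace lines (autotest_common.sh@923-942); the retry loop implied by "local i=0" is assumed, since this log only shows first attempts that succeed.

# Minimal sketch of make_filesystem as suggested by the trace; not the verbatim helper.
make_filesystem() {
    local fstype=$1 dev_name=$2
    local i=0 force
    if [[ $fstype == ext4 ]]; then
        force=-F          # mkfs.ext4 forces with -F
    else
        force=-f          # mkfs.btrfs / mkfs.xfs force with -f
    fi
    # Assumed retry loop; the trace only shows the first, successful attempt.
    while (( i++ < 3 )); do
        mkfs.$fstype $force "$dev_name" && return 0
        sleep 1
    done
    return 1
}

# Usage matching the traced invocations:
#   make_filesystem ext4 /dev/nvme0n1p1
#   make_filesystem xfs  /dev/nvme0n1p1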
00:23:24.892 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:23:24.892 00:49:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:23:27.418 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:23:27.418 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:23:27.418 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:23:27.418 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:23:27.418 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:23:27.418 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:23:27.418 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 77875 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:23:27.419 00:23:27.419 real 0m3.098s 00:23:27.419 user 0m0.025s 00:23:27.419 sys 0m0.052s 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:23:27.419 ************************************ 00:23:27.419 END TEST filesystem_xfs 00:23:27.419 ************************************ 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:27.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 77875 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 77875 ']' 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 77875 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 77875 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:27.419 killing process with pid 77875 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 77875' 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 77875 00:23:27.419 [2024-05-15 00:49:30.350361] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:27.419 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 77875 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:23:27.676 00:23:27.676 real 0m9.128s 00:23:27.676 user 0m34.341s 00:23:27.676 sys 0m1.683s 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:27.676 ************************************ 00:23:27.676 END TEST nvmf_filesystem_no_in_capsule 00:23:27.676 ************************************ 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 
']' 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:23:27.676 ************************************ 00:23:27.676 START TEST nvmf_filesystem_in_capsule 00:23:27.676 ************************************ 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=78188 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 78188 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 78188 ']' 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:27.676 00:49:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:27.676 [2024-05-15 00:49:30.859385] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:27.676 [2024-05-15 00:49:30.859473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.945 [2024-05-15 00:49:31.002485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.945 [2024-05-15 00:49:31.086262] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.945 [2024-05-15 00:49:31.086320] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.945 [2024-05-15 00:49:31.086348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.945 [2024-05-15 00:49:31.086356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:27.945 [2024-05-15 00:49:31.086363] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.945 [2024-05-15 00:49:31.086527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.945 [2024-05-15 00:49:31.086666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.945 [2024-05-15 00:49:31.087280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.945 [2024-05-15 00:49:31.087290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:28.906 [2024-05-15 00:49:31.932633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.906 00:49:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:28.906 Malloc1 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.906 00:49:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:28.906 [2024-05-15 00:49:32.104556] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:28.906 [2024-05-15 00:49:32.104860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:23:28.906 { 00:23:28.906 "aliases": [ 00:23:28.906 "cde1a192-a5c5-4748-9950-68b4727656c9" 00:23:28.906 ], 00:23:28.906 "assigned_rate_limits": { 00:23:28.906 "r_mbytes_per_sec": 0, 00:23:28.906 "rw_ios_per_sec": 0, 00:23:28.906 "rw_mbytes_per_sec": 0, 00:23:28.906 "w_mbytes_per_sec": 0 00:23:28.906 }, 00:23:28.906 "block_size": 512, 00:23:28.906 "claim_type": "exclusive_write", 00:23:28.906 "claimed": true, 00:23:28.906 "driver_specific": {}, 00:23:28.906 "memory_domains": [ 00:23:28.906 { 00:23:28.906 "dma_device_id": "system", 00:23:28.906 "dma_device_type": 1 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.906 "dma_device_type": 2 00:23:28.906 } 00:23:28.906 ], 00:23:28.906 "name": "Malloc1", 00:23:28.906 "num_blocks": 1048576, 00:23:28.906 "product_name": "Malloc disk", 00:23:28.906 "supported_io_types": { 00:23:28.906 "abort": true, 00:23:28.906 "compare": false, 00:23:28.906 "compare_and_write": false, 00:23:28.906 "flush": true, 00:23:28.906 "nvme_admin": false, 00:23:28.906 "nvme_io": false, 00:23:28.906 "read": true, 00:23:28.906 "reset": true, 
00:23:28.906 "unmap": true, 00:23:28.906 "write": true, 00:23:28.906 "write_zeroes": true 00:23:28.906 }, 00:23:28.906 "uuid": "cde1a192-a5c5-4748-9950-68b4727656c9", 00:23:28.906 "zoned": false 00:23:28.906 } 00:23:28.906 ]' 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:23:28.906 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:23:29.165 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:23:29.165 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:23:29.165 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:23:29.165 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:23:29.165 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:29.165 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:23:29.165 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:23:29.165 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:23:29.165 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:23:29.165 00:49:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 
00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:23:31.693 00:49:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:23:32.628 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:23:32.628 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:23:32.628 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:23:32.628 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:32.628 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:32.628 ************************************ 00:23:32.628 START TEST filesystem_in_capsule_ext4 00:23:32.628 ************************************ 00:23:32.628 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:23:32.628 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:23:32.628 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:23:32.629 mke2fs 1.46.5 (30-Dec-2021) 00:23:32.629 Discarding device blocks: 0/522240 done 00:23:32.629 Creating filesystem with 522240 1k blocks and 130560 inodes 00:23:32.629 Filesystem UUID: 97ec1dd3-873e-4bae-9dc5-7eb3f4dd570d 00:23:32.629 Superblock backups stored on blocks: 00:23:32.629 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:23:32.629 00:23:32.629 Allocating group tables: 0/64 done 00:23:32.629 Writing inode tables: 0/64 done 00:23:32.629 Creating journal (8192 blocks): done 00:23:32.629 Writing superblocks and filesystem accounting information: 0/64 done 00:23:32.629 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:23:32.629 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 78188 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:23:32.885 ************************************ 00:23:32.885 END TEST filesystem_in_capsule_ext4 00:23:32.885 ************************************ 00:23:32.885 00:23:32.885 real 0m0.361s 00:23:32.885 user 0m0.023s 00:23:32.885 sys 0m0.061s 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:32.885 ************************************ 00:23:32.885 START TEST filesystem_in_capsule_btrfs 00:23:32.885 ************************************ 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:23:32.885 00:49:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:23:32.885 btrfs-progs v6.6.2 00:23:32.885 See https://btrfs.readthedocs.io for more information. 00:23:32.885 00:23:32.885 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:23:32.885 NOTE: several default settings have changed in version 5.15, please make sure 00:23:32.885 this does not affect your deployments: 00:23:32.885 - DUP for metadata (-m dup) 00:23:32.885 - enabled no-holes (-O no-holes) 00:23:32.886 - enabled free-space-tree (-R free-space-tree) 00:23:32.886 00:23:32.886 Label: (null) 00:23:32.886 UUID: c71a3b30-7370-4237-bcc8-5a8b0c5deb87 00:23:32.886 Node size: 16384 00:23:32.886 Sector size: 4096 00:23:32.886 Filesystem size: 510.00MiB 00:23:32.886 Block group profiles: 00:23:32.886 Data: single 8.00MiB 00:23:32.886 Metadata: DUP 32.00MiB 00:23:32.886 System: DUP 8.00MiB 00:23:32.886 SSD detected: yes 00:23:32.886 Zoned device: no 00:23:32.886 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:23:32.886 Runtime features: free-space-tree 00:23:32.886 Checksum: crc32c 00:23:32.886 Number of devices: 1 00:23:32.886 Devices: 00:23:32.886 ID SIZE PATH 00:23:32.886 1 510.00MiB /dev/nvme0n1p1 00:23:32.886 00:23:32.886 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0 00:23:32.886 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:23:32.886 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:23:32.886 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:23:32.886 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:23:32.886 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:23:33.143 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:23:33.143 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:23:33.143 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 78188 00:23:33.143 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:23:33.143 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:23:33.143 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:23:33.143 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:23:33.143 ************************************ 00:23:33.143 END TEST filesystem_in_capsule_btrfs 00:23:33.143 ************************************ 00:23:33.143 00:23:33.143 real 0m0.221s 00:23:33.143 user 0m0.025s 00:23:33.143 sys 0m0.064s 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:33.144 ************************************ 00:23:33.144 START TEST filesystem_in_capsule_xfs 00:23:33.144 ************************************ 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:23:33.144 00:49:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:23:33.144 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:23:33.144 = sectsz=512 attr=2, projid32bit=1 00:23:33.144 = crc=1 finobt=1, sparse=1, rmapbt=0 00:23:33.144 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:23:33.144 data = bsize=4096 blocks=130560, imaxpct=25 00:23:33.144 = sunit=0 swidth=0 blks 00:23:33.144 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:23:33.144 log =internal log bsize=4096 blocks=16384, version=2 00:23:33.144 = sectsz=512 sunit=0 blks, lazy-count=1 00:23:33.144 realtime =none extsz=4096 blocks=0, rtextents=0 00:23:34.078 Discarding blocks...Done. 
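After each mkfs, nvmf_filesystem_create runs the same smoke test that appears next in the trace: mount the fresh partition, create and delete a file with syncs in between, unmount, confirm the target process (pid 78188 in this run) is still alive, and confirm lsblk still lists both the namespace and its partition. A condensed reconstruction under those assumptions, with the wrapper name verify_filesystem invented for readability:

# Condensed reconstruction of the verification traced after mkfs (filesystem.sh@23-43).
verify_filesystem() {
    local nvmfpid=$1 part=/dev/nvme0n1p1
    mount "$part" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    # The target must survive the I/O round-trip.
    kill -0 "$nvmfpid"
    # Both the namespace and the partition must still be visible to the host.
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1
}

# verify_filesystem 78188    # pid taken from this run's trace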
00:23:34.078 00:49:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0 00:23:34.078 00:49:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 78188 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:23:35.984 ************************************ 00:23:35.984 END TEST filesystem_in_capsule_xfs 00:23:35.984 ************************************ 00:23:35.984 00:23:35.984 real 0m2.625s 00:23:35.984 user 0m0.020s 00:23:35.984 sys 0m0.059s 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:23:35.984 00:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:35.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:35.984 00:49:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 78188 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 78188 ']' 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 78188 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 78188 00:23:35.984 killing process with pid 78188 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 78188' 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 78188 00:23:35.984 [2024-05-15 00:49:39.147316] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:35.984 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 78188 00:23:36.550 ************************************ 00:23:36.550 END TEST nvmf_filesystem_in_capsule 00:23:36.550 ************************************ 00:23:36.550 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:23:36.550 00:23:36.550 real 0m8.761s 00:23:36.550 user 0m33.141s 00:23:36.550 sys 0m1.628s 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 
00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.551 rmmod nvme_tcp 00:23:36.551 rmmod nvme_fabrics 00:23:36.551 rmmod nvme_keyring 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:36.551 00:23:36.551 real 0m18.725s 00:23:36.551 user 1m7.724s 00:23:36.551 sys 0m3.733s 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:36.551 00:49:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:23:36.551 ************************************ 00:23:36.551 END TEST nvmf_filesystem 00:23:36.551 ************************************ 00:23:36.551 00:49:39 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:23:36.551 00:49:39 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:36.551 00:49:39 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:36.551 00:49:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:36.551 ************************************ 00:23:36.551 START TEST nvmf_target_discovery 00:23:36.551 ************************************ 00:23:36.551 00:49:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:23:36.809 * Looking for test storage... 
00:23:36.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:36.809 00:49:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:36.809 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:36.809 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.809 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.809 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.809 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.809 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.809 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.809 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:36.810 Cannot find device "nvmf_tgt_br" 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:36.810 Cannot find device "nvmf_tgt_br2" 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:36.810 Cannot find device "nvmf_tgt_br" 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:23:36.810 00:49:39 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:36.810 Cannot find device "nvmf_tgt_br2" 00:23:36.811 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:23:36.811 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:36.811 00:49:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:36.811 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:36.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:36.811 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:23:36.811 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:36.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:36.811 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:23:36.811 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:36.811 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:36.811 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:36.811 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:36.811 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:36.811 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:37.069 00:49:40 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:37.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:23:37.069 00:23:37.069 --- 10.0.0.2 ping statistics --- 00:23:37.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.069 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:37.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:37.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:37.069 00:23:37.069 --- 10.0.0.3 ping statistics --- 00:23:37.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.069 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:37.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:37.069 00:23:37.069 --- 10.0.0.1 ping statistics --- 00:23:37.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.069 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=78649 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 78649 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 78649 ']' 00:23:37.069 00:49:40 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:37.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:37.069 00:49:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.069 [2024-05-15 00:49:40.299986] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:37.069 [2024-05-15 00:49:40.300076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.329 [2024-05-15 00:49:40.436803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.329 [2024-05-15 00:49:40.536022] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.329 [2024-05-15 00:49:40.536321] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.329 [2024-05-15 00:49:40.536492] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.329 [2024-05-15 00:49:40.536549] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.329 [2024-05-15 00:49:40.536669] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
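Once the reactors above are up, discovery.sh drives the target purely over JSON-RPC. The traced rpc_cmd calls (target/discovery.sh@23 through @35, just below) appear to map onto SPDK's scripts/rpc.py as in this hedged sketch for a single subsystem; the address, port, serial and NQN are copied from this run rather than being defaults.

    # Hedged sketch of the traced setup, shown for one of the four cnodeN subsystems.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py            # path as used in this workspace
    $rpc nvmf_create_transport -t tcp -o -u 8192               # TCP transport (discovery.sh@23)
    $rpc bdev_null_create Null1 102400 512                     # null bdev, sizes per the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery listener
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # sixth discovery record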
00:23:37.329 [2024-05-15 00:49:40.536817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.329 [2024-05-15 00:49:40.536986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.329 [2024-05-15 00:49:40.537110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.329 [2024-05-15 00:49:40.537307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 [2024-05-15 00:49:41.345736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 Null1 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:38.267 [2024-05-15 00:49:41.398501] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:38.267 [2024-05-15 00:49:41.398830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 Null2 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 Null3 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery 
-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:38.267 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.268 Null4 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.268 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -a 10.0.0.2 -s 4420 00:23:38.527 00:23:38.527 Discovery Log Number of Records 6, Generation counter 6 00:23:38.527 =====Discovery Log Entry 0====== 00:23:38.527 trtype: tcp 00:23:38.527 adrfam: ipv4 00:23:38.527 subtype: current discovery subsystem 00:23:38.527 treq: not required 00:23:38.527 portid: 0 00:23:38.527 trsvcid: 4420 00:23:38.527 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:38.527 traddr: 10.0.0.2 00:23:38.527 eflags: explicit discovery connections, duplicate discovery information 00:23:38.527 sectype: none 00:23:38.527 =====Discovery Log Entry 1====== 00:23:38.527 trtype: tcp 00:23:38.527 adrfam: ipv4 00:23:38.527 subtype: nvme subsystem 00:23:38.527 treq: not required 00:23:38.527 portid: 0 00:23:38.527 trsvcid: 4420 00:23:38.527 subnqn: nqn.2016-06.io.spdk:cnode1 00:23:38.527 traddr: 10.0.0.2 00:23:38.527 eflags: none 00:23:38.527 sectype: none 00:23:38.527 =====Discovery Log Entry 2====== 00:23:38.527 trtype: tcp 00:23:38.527 adrfam: ipv4 00:23:38.527 subtype: nvme subsystem 00:23:38.527 treq: not required 00:23:38.527 portid: 0 00:23:38.527 trsvcid: 4420 00:23:38.527 subnqn: nqn.2016-06.io.spdk:cnode2 00:23:38.527 traddr: 10.0.0.2 00:23:38.527 eflags: none 00:23:38.527 sectype: none 00:23:38.527 =====Discovery Log Entry 3====== 00:23:38.527 trtype: tcp 00:23:38.527 adrfam: ipv4 00:23:38.527 subtype: nvme subsystem 00:23:38.527 treq: not required 00:23:38.527 portid: 0 00:23:38.527 trsvcid: 4420 00:23:38.527 subnqn: nqn.2016-06.io.spdk:cnode3 00:23:38.527 traddr: 10.0.0.2 00:23:38.527 eflags: none 00:23:38.527 sectype: none 00:23:38.527 =====Discovery Log Entry 4====== 00:23:38.527 trtype: tcp 00:23:38.527 adrfam: ipv4 00:23:38.527 subtype: nvme subsystem 00:23:38.527 treq: not required 00:23:38.527 portid: 0 00:23:38.527 trsvcid: 4420 00:23:38.527 subnqn: nqn.2016-06.io.spdk:cnode4 00:23:38.527 traddr: 10.0.0.2 00:23:38.527 eflags: none 00:23:38.527 sectype: none 00:23:38.527 =====Discovery Log Entry 5====== 00:23:38.527 trtype: tcp 00:23:38.527 adrfam: ipv4 00:23:38.527 subtype: discovery subsystem referral 00:23:38.527 treq: not required 00:23:38.527 portid: 0 00:23:38.527 trsvcid: 4430 00:23:38.527 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:38.527 traddr: 10.0.0.2 00:23:38.527 eflags: none 00:23:38.527 sectype: none 00:23:38.527 Perform nvmf subsystem discovery via RPC 00:23:38.527 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:23:38.527 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:23:38.527 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.527 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.527 [ 00:23:38.527 { 00:23:38.527 "allow_any_host": true, 00:23:38.527 "hosts": [], 00:23:38.527 "listen_addresses": [ 00:23:38.527 { 00:23:38.527 "adrfam": "IPv4", 00:23:38.527 "traddr": "10.0.0.2", 00:23:38.527 "trsvcid": "4420", 00:23:38.527 "trtype": "TCP" 00:23:38.527 } 00:23:38.527 ], 00:23:38.527 
"nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:38.527 "subtype": "Discovery" 00:23:38.527 }, 00:23:38.527 { 00:23:38.527 "allow_any_host": true, 00:23:38.527 "hosts": [], 00:23:38.527 "listen_addresses": [ 00:23:38.527 { 00:23:38.527 "adrfam": "IPv4", 00:23:38.527 "traddr": "10.0.0.2", 00:23:38.527 "trsvcid": "4420", 00:23:38.527 "trtype": "TCP" 00:23:38.527 } 00:23:38.527 ], 00:23:38.527 "max_cntlid": 65519, 00:23:38.527 "max_namespaces": 32, 00:23:38.527 "min_cntlid": 1, 00:23:38.527 "model_number": "SPDK bdev Controller", 00:23:38.527 "namespaces": [ 00:23:38.527 { 00:23:38.527 "bdev_name": "Null1", 00:23:38.527 "name": "Null1", 00:23:38.527 "nguid": "4723CE6C08C7425A9108DFF24DE9FF6F", 00:23:38.527 "nsid": 1, 00:23:38.527 "uuid": "4723ce6c-08c7-425a-9108-dff24de9ff6f" 00:23:38.527 } 00:23:38.527 ], 00:23:38.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.527 "serial_number": "SPDK00000000000001", 00:23:38.527 "subtype": "NVMe" 00:23:38.527 }, 00:23:38.527 { 00:23:38.527 "allow_any_host": true, 00:23:38.527 "hosts": [], 00:23:38.527 "listen_addresses": [ 00:23:38.527 { 00:23:38.527 "adrfam": "IPv4", 00:23:38.527 "traddr": "10.0.0.2", 00:23:38.527 "trsvcid": "4420", 00:23:38.527 "trtype": "TCP" 00:23:38.527 } 00:23:38.527 ], 00:23:38.527 "max_cntlid": 65519, 00:23:38.527 "max_namespaces": 32, 00:23:38.527 "min_cntlid": 1, 00:23:38.527 "model_number": "SPDK bdev Controller", 00:23:38.527 "namespaces": [ 00:23:38.527 { 00:23:38.527 "bdev_name": "Null2", 00:23:38.527 "name": "Null2", 00:23:38.527 "nguid": "50C3AEFFBA15479488488A474DFA62F2", 00:23:38.527 "nsid": 1, 00:23:38.527 "uuid": "50c3aeff-ba15-4794-8848-8a474dfa62f2" 00:23:38.527 } 00:23:38.527 ], 00:23:38.527 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:38.527 "serial_number": "SPDK00000000000002", 00:23:38.527 "subtype": "NVMe" 00:23:38.527 }, 00:23:38.527 { 00:23:38.527 "allow_any_host": true, 00:23:38.527 "hosts": [], 00:23:38.527 "listen_addresses": [ 00:23:38.527 { 00:23:38.527 "adrfam": "IPv4", 00:23:38.527 "traddr": "10.0.0.2", 00:23:38.527 "trsvcid": "4420", 00:23:38.527 "trtype": "TCP" 00:23:38.527 } 00:23:38.527 ], 00:23:38.527 "max_cntlid": 65519, 00:23:38.527 "max_namespaces": 32, 00:23:38.527 "min_cntlid": 1, 00:23:38.527 "model_number": "SPDK bdev Controller", 00:23:38.527 "namespaces": [ 00:23:38.527 { 00:23:38.527 "bdev_name": "Null3", 00:23:38.527 "name": "Null3", 00:23:38.527 "nguid": "C4679D62C4A64BE8A74711AA9F2DC00A", 00:23:38.527 "nsid": 1, 00:23:38.527 "uuid": "c4679d62-c4a6-4be8-a747-11aa9f2dc00a" 00:23:38.527 } 00:23:38.527 ], 00:23:38.527 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:23:38.527 "serial_number": "SPDK00000000000003", 00:23:38.527 "subtype": "NVMe" 00:23:38.527 }, 00:23:38.527 { 00:23:38.527 "allow_any_host": true, 00:23:38.527 "hosts": [], 00:23:38.527 "listen_addresses": [ 00:23:38.527 { 00:23:38.527 "adrfam": "IPv4", 00:23:38.527 "traddr": "10.0.0.2", 00:23:38.527 "trsvcid": "4420", 00:23:38.527 "trtype": "TCP" 00:23:38.527 } 00:23:38.527 ], 00:23:38.527 "max_cntlid": 65519, 00:23:38.527 "max_namespaces": 32, 00:23:38.527 "min_cntlid": 1, 00:23:38.527 "model_number": "SPDK bdev Controller", 00:23:38.527 "namespaces": [ 00:23:38.527 { 00:23:38.527 "bdev_name": "Null4", 00:23:38.527 "name": "Null4", 00:23:38.527 "nguid": "AF6954B3299B4580A0D27303B68A11C2", 00:23:38.527 "nsid": 1, 00:23:38.527 "uuid": "af6954b3-299b-4580-a0d2-7303b68a11c2" 00:23:38.527 } 00:23:38.527 ], 00:23:38.527 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:23:38.527 "serial_number": "SPDK00000000000004", 00:23:38.527 "subtype": 
"NVMe" 00:23:38.527 } 00:23:38.527 ] 00:23:38.527 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.527 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:23:38.527 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:23:38.527 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.528 00:49:41 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:38.528 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:38.528 rmmod nvme_tcp 00:23:38.528 rmmod nvme_fabrics 00:23:38.789 rmmod nvme_keyring 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 78649 ']' 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 78649 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 78649 ']' 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 78649 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:38.789 
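The teardown traced just above mirrors that setup one-to-one; a hedged per-subsystem sketch, again with names taken from this run:

    # Hedged recap of the traced cleanup (discovery.sh@42-47 above).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1            # drop the subsystem first
    $rpc bdev_null_delete Null1                                      # then its backing null bdev
    $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430   # and the referral added earlier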
00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 78649 00:23:38.789 killing process with pid 78649 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 78649' 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 78649 00:23:38.789 [2024-05-15 00:49:41.879019] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:38.789 00:49:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@971 -- # wait 78649 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:39.094 00:23:39.094 real 0m2.356s 00:23:39.094 user 0m6.509s 00:23:39.094 sys 0m0.646s 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:39.094 00:49:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.094 ************************************ 00:23:39.094 END TEST nvmf_target_discovery 00:23:39.094 ************************************ 00:23:39.094 00:49:42 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:23:39.094 00:49:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:39.094 00:49:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:39.094 00:49:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.094 ************************************ 00:23:39.094 START TEST nvmf_referrals 00:23:39.094 ************************************ 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:23:39.094 * Looking for test storage... 
00:23:39.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:39.094 Cannot find device "nvmf_tgt_br" 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:39.094 Cannot find device "nvmf_tgt_br2" 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:39.094 Cannot find device "nvmf_tgt_br" 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:23:39.094 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:39.352 Cannot find device "nvmf_tgt_br2" 
00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:39.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:39.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:39.352 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:39.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:23:39.353 00:23:39.353 --- 10.0.0.2 ping statistics --- 00:23:39.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.353 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:39.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:39.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:23:39.353 00:23:39.353 --- 10.0.0.3 ping statistics --- 00:23:39.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.353 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:39.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:23:39.353 00:23:39.353 --- 10.0.0.1 ping statistics --- 00:23:39.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.353 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=78874 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 78874 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 78874 ']' 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:39.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
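With connectivity across the bridge verified by the pings above, the target application is started inside the namespace and the script waits for its RPC socket. A minimal stand-in for the nvmfappstart/waitforlisten helpers, reusing the binary path and flags from the trace (the real waitforlisten also bounds its retries and checks that the RPC server actually answers):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # crude wait-for-listen: poll until the RPC UNIX socket shows up
  until [ -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.2
  done
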
00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:39.353 00:49:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:39.611 [2024-05-15 00:49:42.685459] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:39.611 [2024-05-15 00:49:42.685642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.611 [2024-05-15 00:49:42.832082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.870 [2024-05-15 00:49:42.934951] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.870 [2024-05-15 00:49:42.935013] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.870 [2024-05-15 00:49:42.935037] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.870 [2024-05-15 00:49:42.935046] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.870 [2024-05-15 00:49:42.935061] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.870 [2024-05-15 00:49:42.935250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.870 [2024-05-15 00:49:42.935386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.870 [2024-05-15 00:49:42.936039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.870 [2024-05-15 00:49:42.936079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.437 [2024-05-15 00:49:43.688562] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.437 [2024-05-15 00:49:43.708179] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:40.437 [2024-05-15 00:49:43.708453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.437 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.696 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.697 00:49:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # 
echo 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:23:40.956 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 
--hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:41.216 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 
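The exchange above is the heart of the referral test: referrals are added and removed over JSON-RPC, and the RPC view is checked against what a host actually sees through the discovery service. Condensed, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper (method names and flags are as traced; the hostnqn/hostid UUID is generated per run by nvmf/common.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # add three referrals to the discovery subsystem, then read them back
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # the same list as a host sees it, via the discovery listener on 10.0.0.2:8009
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # removal mirrors addition
  $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
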
00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:23:41.475 00:49:44 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:23:41.475 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:41.734 rmmod nvme_tcp 00:23:41.734 rmmod nvme_fabrics 00:23:41.734 rmmod nvme_keyring 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 78874 ']' 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 78874 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 78874 ']' 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 78874 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 78874 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:41.734 killing process with pid 78874 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 78874' 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 78874 00:23:41.734 [2024-05-15 00:49:44.940372] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:41.734 00:49:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 78874 00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
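The teardown traced around this point reduces to a handful of commands: clear the trap, unload the host-side NVMe modules, kill the target, and remove the namespace plumbing. Roughly (killprocess in the trace also checks the process name first, and _remove_spdk_ns is never expanded because its output is redirected, so the netns deletion below is an assumption about what it does):

  sync
  modprobe -v -r nvme-tcp       # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above come from this
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if
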
00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:41.994 00:23:41.994 real 0m3.000s 00:23:41.994 user 0m9.773s 00:23:41.994 sys 0m0.863s 00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:41.994 00:49:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:23:41.994 ************************************ 00:23:41.994 END TEST nvmf_referrals 00:23:41.994 ************************************ 00:23:41.994 00:49:45 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:23:41.994 00:49:45 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:41.994 00:49:45 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:41.994 00:49:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:41.994 ************************************ 00:23:41.994 START TEST nvmf_connect_disconnect 00:23:41.994 ************************************ 00:23:41.994 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:23:42.253 * Looking for test storage... 00:23:42.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:42.253 00:49:45 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.253 00:49:45 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:42.253 Cannot find device "nvmf_tgt_br" 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:42.253 Cannot find device "nvmf_tgt_br2" 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:23:42.253 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:42.254 Cannot find device "nvmf_tgt_br" 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:42.254 Cannot find device "nvmf_tgt_br2" 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:42.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:42.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:42.254 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link 
set nvmf_init_if up 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:42.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:23:42.516 00:23:42.516 --- 10.0.0.2 ping statistics --- 00:23:42.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.516 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:42.516 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:42.516 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:23:42.516 00:23:42.516 --- 10.0.0.3 ping statistics --- 00:23:42.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.516 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:42.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:42.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:23:42.516 00:23:42.516 --- 10.0.0.1 ping statistics --- 00:23:42.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.516 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=79176 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 79176 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 79176 ']' 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:42.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:42.516 00:49:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:42.775 [2024-05-15 00:49:45.802323] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:42.775 [2024-05-15 00:49:45.802454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.775 [2024-05-15 00:49:45.942701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.775 [2024-05-15 00:49:46.043176] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:42.775 [2024-05-15 00:49:46.043240] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.775 [2024-05-15 00:49:46.043256] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.775 [2024-05-15 00:49:46.043270] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.775 [2024-05-15 00:49:46.043282] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.775 [2024-05-15 00:49:46.043393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.775 [2024-05-15 00:49:46.043787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.775 [2024-05-15 00:49:46.044760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.775 [2024-05-15 00:49:46.044771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:43.708 [2024-05-15 00:49:46.878174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 
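For the connect/disconnect test the target needs a backing bdev, a subsystem exposing it, and a TCP listener (added just below in the trace); the host then connects and disconnects 100 times. The per-iteration body runs under set +x and is not expanded, so the nvme connect line below is a representative guess assembled from NVME_CONNECT='nvme connect -i 8' and the listener parameters; the "NQN:... disconnected 1 controller(s)" lines that follow are what each nvme disconnect prints:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # standing in for the suite's rpc_cmd wrapper

  $rpc bdev_malloc_create 64 512                     # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # one of the 100 traced iterations, approximately
  nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
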
00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.708 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:43.708 [2024-05-15 00:49:46.952644] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:43.708 [2024-05-15 00:49:46.953018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.709 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.709 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:23:43.709 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:23:43.709 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:23:43.709 00:49:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:23:46.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:48.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:50.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:52.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:55.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:57.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:59.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:01.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:03.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:06.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:08.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:10.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:12.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:15.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:17.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:19.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:21.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:24.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:26.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:28.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:30.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:33.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:35.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:37.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:40.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:42.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:44.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:46.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:24:49.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:51.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:53.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:55.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:58.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:59.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:02.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:04.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:06.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:08.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:11.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:13.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:15.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:17.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:20.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:22.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:24.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:27.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:29.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:31.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:33.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:36.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:38.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:40.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:42.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:45.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:46.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:49.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:51.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:53.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:55.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:58.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:00.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:02.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:04.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:07.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:09.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:11.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:13.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:16.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:18.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:20.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:23.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:25.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:27.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:29.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:31.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:33.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:36.383 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:38.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:40.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:42.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:45.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:47.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:49.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:51.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:54.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:56.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:58.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:00.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:03.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:04.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:07.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:09.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:11.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:13.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:16.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:18.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:20.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:22.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:25.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:27.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.105 rmmod nvme_tcp 00:27:27.105 rmmod nvme_fabrics 00:27:27.105 rmmod nvme_keyring 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 79176 ']' 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 79176 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 79176 ']' 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 79176 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux 
']' 00:27:27.105 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 79176 00:27:27.362 killing process with pid 79176 00:27:27.362 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:27.362 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:27.362 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 79176' 00:27:27.362 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 79176 00:27:27.362 [2024-05-15 00:53:30.397266] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:27.362 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 79176 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:27.621 00:27:27.621 real 3m45.457s 00:27:27.621 user 14m32.534s 00:27:27.621 sys 0m26.806s 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:27.621 ************************************ 00:27:27.621 END TEST nvmf_connect_disconnect 00:27:27.621 00:53:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:27.621 ************************************ 00:27:27.621 00:53:30 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:27:27.621 00:53:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:27.621 00:53:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:27.621 00:53:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:27.621 ************************************ 00:27:27.621 START TEST nvmf_multitarget 00:27:27.621 ************************************ 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:27:27.621 * Looking for test storage... 
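The nvmf_connect_disconnect run that finished above sets NVME_CONNECT='nvme connect -i 8' and num_iterations=100, then repeatedly connects to and disconnects from cnode1; each cycle prints one of the long run of 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' lines. A minimal sketch of such a loop, not the actual connect_disconnect.sh:

    # Hypothetical reconstruction of the iteration, not the script's exact logic.
    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints "... disconnected 1 controller(s)"
    done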
00:27:27.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.621 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.622 00:53:30 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:27.622 Cannot find device "nvmf_tgt_br" 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:27:27.622 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:27.880 Cannot find device "nvmf_tgt_br2" 00:27:27.880 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:27:27.880 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:27.880 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:27.880 Cannot find device "nvmf_tgt_br" 00:27:27.880 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:27:27.880 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:27.880 Cannot find device "nvmf_tgt_br2" 00:27:27.880 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:27:27.880 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:27.880 00:53:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:27:27.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:27.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:27.880 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:28.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:28.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:27:28.139 00:27:28.139 --- 10.0.0.2 ping statistics --- 00:27:28.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.139 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:28.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:28.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:27:28.139 00:27:28.139 --- 10.0.0.3 ping statistics --- 00:27:28.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.139 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:28.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:28.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:27:28.139 00:27:28.139 --- 10.0.0.1 ping statistics --- 00:27:28.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.139 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:27:28.139 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=82944 00:27:28.140 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:28.140 00:53:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 82944 00:27:28.140 00:53:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # '[' -z 82944 ']' 00:27:28.140 00:53:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.140 00:53:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:28.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.140 00:53:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
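nvmf_veth_init, traced above, builds the TCP test topology: an nvmf_tgt_ns_spdk namespace holding the target-side veth ends, 10.0.0.1/24 on the initiator side, 10.0.0.2/24 and 10.0.0.3/24 inside the namespace, everything bridged over nvmf_br, plus an iptables accept rule for port 4420; the pings then verify reachability. Condensed, with the second target interface and a few 'link set ... up' steps omitted:

    # Condensed from the traced nvmf_veth_init; not the full helper.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target, as in the trace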
00:27:28.140 00:53:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:28.140 00:53:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:27:28.140 [2024-05-15 00:53:31.319796] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:27:28.140 [2024-05-15 00:53:31.319898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.398 [2024-05-15 00:53:31.455863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:28.398 [2024-05-15 00:53:31.557516] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.398 [2024-05-15 00:53:31.557586] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.398 [2024-05-15 00:53:31.557628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.398 [2024-05-15 00:53:31.557641] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.398 [2024-05-15 00:53:31.557649] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.398 [2024-05-15 00:53:31.557965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.398 [2024-05-15 00:53:31.558053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.398 [2024-05-15 00:53:31.558378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.398 [2024-05-15 00:53:31.558415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:27:29.332 00:53:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:27:29.591 "nvmf_tgt_1" 00:27:29.591 00:53:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:27:29.591 "nvmf_tgt_2" 00:27:29.591 00:53:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:27:29.591 00:53:32 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:27:29.856 00:53:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:27:29.857 00:53:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:27:29.857 true 00:27:29.857 00:53:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:27:30.118 true 00:27:30.118 00:53:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:27:30.118 00:53:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:27:30.118 00:53:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:27:30.118 00:53:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:27:30.118 00:53:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:27:30.118 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:30.118 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.378 rmmod nvme_tcp 00:27:30.378 rmmod nvme_fabrics 00:27:30.378 rmmod nvme_keyring 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 82944 ']' 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 82944 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 82944 ']' 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 82944 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 82944 00:27:30.378 killing process with pid 82944 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 82944' 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 82944 00:27:30.378 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 82944 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:30.637 ************************************ 00:27:30.637 END TEST nvmf_multitarget 00:27:30.637 ************************************ 00:27:30.637 00:27:30.637 real 0m3.013s 00:27:30.637 user 0m9.904s 00:27:30.637 sys 0m0.779s 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:30.637 00:53:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:27:30.637 00:53:33 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:27:30.637 00:53:33 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:30.637 00:53:33 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:30.637 00:53:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:30.637 ************************************ 00:27:30.637 START TEST nvmf_rpc 00:27:30.637 ************************************ 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:27:30.637 * Looking for test storage... 
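The nvmf_multitarget body that finished above exercised the per-target RPCs through multitarget_rpc.py: count the default target, create nvmf_tgt_1 and nvmf_tgt_2, verify three targets exist, delete both, and verify one remains. Roughly:

    # Sketch of the checks the trace drives through multitarget_rpc.py.
    rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]       # only the default target at start
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32            # same -n/-s arguments as the traced run
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]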
00:27:30.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.637 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:30.896 Cannot find device "nvmf_tgt_br" 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:30.896 Cannot find device "nvmf_tgt_br2" 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:30.896 Cannot find device "nvmf_tgt_br" 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:27:30.896 00:53:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:30.896 Cannot find device "nvmf_tgt_br2" 00:27:30.896 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:27:30.896 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:30.896 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:30.896 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:30.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:30.896 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:27:30.896 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:30.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:30.897 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:31.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:27:31.155 00:27:31.155 --- 10.0.0.2 ping statistics --- 00:27:31.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.155 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:31.155 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:31.155 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:27:31.155 00:27:31.155 --- 10.0.0.3 ping statistics --- 00:27:31.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.155 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:31.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:31.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:27:31.155 00:27:31.155 --- 10.0.0.1 ping statistics --- 00:27:31.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.155 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=83183 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 83183 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 83183 ']' 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:31.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:31.155 00:53:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:31.155 [2024-05-15 00:53:34.386018] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:27:31.155 [2024-05-15 00:53:34.386164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.414 [2024-05-15 00:53:34.532747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:31.414 [2024-05-15 00:53:34.673511] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.414 [2024-05-15 00:53:34.673917] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
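nvmfappstart above launches the target application inside the namespace ('ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF', pid 83183) and then waits for its RPC socket; waitforlisten's exact behaviour is defined in autotest_common.sh, but its effect is roughly:

    # Rough equivalent of the launch-and-wait step; the real waitforlisten helper is more involved.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do      # the log's "listen on UNIX domain socket /var/tmp/spdk.sock"
        kill -0 "$nvmfpid" || exit 1         # give up if the target exited during startup
        sleep 0.5
    done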
00:27:31.414 [2024-05-15 00:53:34.674092] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.414 [2024-05-15 00:53:34.674240] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.414 [2024-05-15 00:53:34.674293] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.414 [2024-05-15 00:53:34.674556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.414 [2024-05-15 00:53:34.674852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.414 [2024-05-15 00:53:34.674933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:31.414 [2024-05-15 00:53:34.674938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:27:32.351 "poll_groups": [ 00:27:32.351 { 00:27:32.351 "admin_qpairs": 0, 00:27:32.351 "completed_nvme_io": 0, 00:27:32.351 "current_admin_qpairs": 0, 00:27:32.351 "current_io_qpairs": 0, 00:27:32.351 "io_qpairs": 0, 00:27:32.351 "name": "nvmf_tgt_poll_group_000", 00:27:32.351 "pending_bdev_io": 0, 00:27:32.351 "transports": [] 00:27:32.351 }, 00:27:32.351 { 00:27:32.351 "admin_qpairs": 0, 00:27:32.351 "completed_nvme_io": 0, 00:27:32.351 "current_admin_qpairs": 0, 00:27:32.351 "current_io_qpairs": 0, 00:27:32.351 "io_qpairs": 0, 00:27:32.351 "name": "nvmf_tgt_poll_group_001", 00:27:32.351 "pending_bdev_io": 0, 00:27:32.351 "transports": [] 00:27:32.351 }, 00:27:32.351 { 00:27:32.351 "admin_qpairs": 0, 00:27:32.351 "completed_nvme_io": 0, 00:27:32.351 "current_admin_qpairs": 0, 00:27:32.351 "current_io_qpairs": 0, 00:27:32.351 "io_qpairs": 0, 00:27:32.351 "name": "nvmf_tgt_poll_group_002", 00:27:32.351 "pending_bdev_io": 0, 00:27:32.351 "transports": [] 00:27:32.351 }, 00:27:32.351 { 00:27:32.351 "admin_qpairs": 0, 00:27:32.351 "completed_nvme_io": 0, 00:27:32.351 "current_admin_qpairs": 0, 00:27:32.351 "current_io_qpairs": 0, 00:27:32.351 "io_qpairs": 0, 00:27:32.351 "name": "nvmf_tgt_poll_group_003", 00:27:32.351 "pending_bdev_io": 0, 00:27:32.351 "transports": [] 00:27:32.351 } 00:27:32.351 ], 00:27:32.351 "tick_rate": 2200000000 00:27:32.351 }' 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.351 [2024-05-15 00:53:35.570795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.351 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.352 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:27:32.352 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.352 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.352 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.352 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:27:32.352 "poll_groups": [ 00:27:32.352 { 00:27:32.352 "admin_qpairs": 0, 00:27:32.352 "completed_nvme_io": 0, 00:27:32.352 "current_admin_qpairs": 0, 00:27:32.352 "current_io_qpairs": 0, 00:27:32.352 "io_qpairs": 0, 00:27:32.352 "name": "nvmf_tgt_poll_group_000", 00:27:32.352 "pending_bdev_io": 0, 00:27:32.352 "transports": [ 00:27:32.352 { 00:27:32.352 "trtype": "TCP" 00:27:32.352 } 00:27:32.352 ] 00:27:32.352 }, 00:27:32.352 { 00:27:32.352 "admin_qpairs": 0, 00:27:32.352 "completed_nvme_io": 0, 00:27:32.352 "current_admin_qpairs": 0, 00:27:32.352 "current_io_qpairs": 0, 00:27:32.352 "io_qpairs": 0, 00:27:32.352 "name": "nvmf_tgt_poll_group_001", 00:27:32.352 "pending_bdev_io": 0, 00:27:32.352 "transports": [ 00:27:32.352 { 00:27:32.352 "trtype": "TCP" 00:27:32.352 } 00:27:32.352 ] 00:27:32.352 }, 00:27:32.352 { 00:27:32.352 "admin_qpairs": 0, 00:27:32.352 "completed_nvme_io": 0, 00:27:32.352 "current_admin_qpairs": 0, 00:27:32.352 "current_io_qpairs": 0, 00:27:32.352 "io_qpairs": 0, 00:27:32.352 "name": "nvmf_tgt_poll_group_002", 00:27:32.352 "pending_bdev_io": 0, 00:27:32.352 "transports": [ 00:27:32.352 { 00:27:32.352 "trtype": "TCP" 00:27:32.352 } 00:27:32.352 ] 00:27:32.352 }, 00:27:32.352 { 00:27:32.352 "admin_qpairs": 0, 00:27:32.352 "completed_nvme_io": 0, 00:27:32.352 "current_admin_qpairs": 0, 00:27:32.352 "current_io_qpairs": 0, 00:27:32.352 "io_qpairs": 0, 00:27:32.352 "name": "nvmf_tgt_poll_group_003", 00:27:32.352 "pending_bdev_io": 0, 00:27:32.352 "transports": [ 00:27:32.352 { 00:27:32.352 "trtype": "TCP" 00:27:32.352 } 00:27:32.352 ] 00:27:32.352 } 00:27:32.352 ], 00:27:32.352 "tick_rate": 2200000000 00:27:32.352 }' 00:27:32.352 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:27:32.352 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:27:32.352 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:27:32.352 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
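The jcount/jsum helpers above reduce the nvmf_get_stats JSON to single numbers (a count of poll-group names, a sum of a per-group counter) so the test can assert on them: 4 poll groups, 0 qpairs before any host connects. The same pipelines standalone, assuming the standard scripts/rpc.py in place of rpc_cmd:

    # The jq/awk pipelines are taken verbatim from the trace; only the rpc.py invocation is assumed.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stats=$($rpc nvmf_get_stats)
    echo "$stats" | jq '.poll_groups[].name' | wc -l                              # jcount: expect 4
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'  # jsum: expect 0
    echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1}END{print s}'  # jsum: expect 0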
00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.611 Malloc1 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.611 [2024-05-15 00:53:35.800235] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:32.611 [2024-05-15 00:53:35.800583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -a 10.0.0.2 -s 4420 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 
--hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -a 10.0.0.2 -s 4420 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -a 10.0.0.2 -s 4420 00:27:32.611 [2024-05-15 00:53:35.828921] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5' 00:27:32.611 Failed to write to /dev/nvme-fabrics: Input/output error 00:27:32.611 could not add new controller: failed to write to nvme-fabrics device 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.611 00:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:32.869 00:53:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:27:32.869 00:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:27:32.869 00:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:27:32.869 00:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:27:32.869 00:53:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:27:34.781 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:27:34.781 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:27:34.781 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c 
SPDKISFASTANDAWESOME 00:27:34.781 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:27:34.781 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:27:34.781 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:27:34.781 00:53:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:35.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:35.040 [2024-05-15 00:53:38.240483] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 
'nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5' 00:27:35.040 Failed to write to /dev/nvme-fabrics: Input/output error 00:27:35.040 could not add new controller: failed to write to nvme-fabrics device 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.040 00:53:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:35.299 00:53:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:27:35.299 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:27:35.299 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:27:35.299 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:27:35.299 00:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:27:37.199 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:27:37.199 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:27:37.199 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:27:37.199 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:27:37.199 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:27:37.199 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:27:37.199 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:37.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:37.507 
00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:37.507 [2024-05-15 00:53:40.634571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.507 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:37.765 00:53:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:27:37.765 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:27:37.765 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:27:37.765 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:27:37.765 00:53:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:27:39.665 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:27:39.665 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:27:39.665 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:27:39.665 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:27:39.665 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:27:39.665 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:27:39.665 00:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:39.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:27:39.665 00:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:39.666 [2024-05-15 00:53:42.943029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.666 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:39.924 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.924 00:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:39.924 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.924 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:39.924 00:53:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.924 00:53:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 
--hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:39.924 00:53:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:27:39.924 00:53:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:27:39.924 00:53:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:27:39.924 00:53:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:27:39.924 00:53:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:42.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.464 
00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:42.464 [2024-05-15 00:53:45.340297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:27:42.464 00:53:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:44.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.367 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:44.368 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.368 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:27:44.368 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:44.368 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.368 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:44.626 [2024-05-15 00:53:47.665501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:27:44.626 00:53:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:47.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:47.157 [2024-05-15 00:53:49.979175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.157 00:53:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:47.157 00:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.157 00:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:47.157 00:53:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:27:47.157 00:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:27:47.157 00:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:27:47.157 00:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:27:47.157 00:53:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:49.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 
$loops) 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 [2024-05-15 00:53:52.288093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 [2024-05-15 00:53:52.336222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.061 
00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.061 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:49.320 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 [2024-05-15 00:53:52.384285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 [2024-05-15 00:53:52.432313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 [2024-05-15 00:53:52.480389] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.321 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:27:49.321 "poll_groups": [ 00:27:49.321 { 00:27:49.321 "admin_qpairs": 2, 00:27:49.321 "completed_nvme_io": 68, 00:27:49.321 "current_admin_qpairs": 0, 00:27:49.321 "current_io_qpairs": 0, 00:27:49.321 "io_qpairs": 16, 00:27:49.321 "name": "nvmf_tgt_poll_group_000", 00:27:49.321 "pending_bdev_io": 0, 00:27:49.321 "transports": [ 00:27:49.321 { 00:27:49.321 "trtype": "TCP" 00:27:49.321 } 00:27:49.321 ] 00:27:49.321 }, 00:27:49.321 { 00:27:49.321 "admin_qpairs": 3, 00:27:49.321 "completed_nvme_io": 66, 00:27:49.321 "current_admin_qpairs": 0, 00:27:49.321 "current_io_qpairs": 0, 00:27:49.321 
"io_qpairs": 17, 00:27:49.321 "name": "nvmf_tgt_poll_group_001", 00:27:49.321 "pending_bdev_io": 0, 00:27:49.321 "transports": [ 00:27:49.321 { 00:27:49.321 "trtype": "TCP" 00:27:49.321 } 00:27:49.321 ] 00:27:49.321 }, 00:27:49.321 { 00:27:49.321 "admin_qpairs": 1, 00:27:49.321 "completed_nvme_io": 218, 00:27:49.321 "current_admin_qpairs": 0, 00:27:49.321 "current_io_qpairs": 0, 00:27:49.321 "io_qpairs": 19, 00:27:49.321 "name": "nvmf_tgt_poll_group_002", 00:27:49.321 "pending_bdev_io": 0, 00:27:49.321 "transports": [ 00:27:49.321 { 00:27:49.321 "trtype": "TCP" 00:27:49.321 } 00:27:49.321 ] 00:27:49.321 }, 00:27:49.321 { 00:27:49.321 "admin_qpairs": 1, 00:27:49.321 "completed_nvme_io": 68, 00:27:49.321 "current_admin_qpairs": 0, 00:27:49.321 "current_io_qpairs": 0, 00:27:49.321 "io_qpairs": 18, 00:27:49.321 "name": "nvmf_tgt_poll_group_003", 00:27:49.321 "pending_bdev_io": 0, 00:27:49.322 "transports": [ 00:27:49.322 { 00:27:49.322 "trtype": "TCP" 00:27:49.322 } 00:27:49.322 ] 00:27:49.322 } 00:27:49.322 ], 00:27:49.322 "tick_rate": 2200000000 00:27:49.322 }' 00:27:49.322 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:27:49.322 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:27:49.322 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:27:49.322 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:27:49.322 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:27:49.322 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:27:49.322 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:27:49.322 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:27:49.322 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:49.580 rmmod nvme_tcp 00:27:49.580 rmmod nvme_fabrics 00:27:49.580 rmmod nvme_keyring 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 83183 ']' 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 83183 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 83183 ']' 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 83183 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # uname 00:27:49.580 
00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83183 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:49.580 killing process with pid 83183 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83183' 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 83183 00:27:49.580 [2024-05-15 00:53:52.769319] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:49.580 00:53:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 83183 00:27:49.838 00:53:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:49.838 00:53:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:49.838 00:53:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:49.838 00:53:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:49.839 00:53:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:49.839 00:53:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.839 00:53:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.839 00:53:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.839 00:53:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:49.839 00:27:49.839 real 0m19.234s 00:27:49.839 user 1m12.328s 00:27:49.839 sys 0m2.287s 00:27:49.839 00:53:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:49.839 00:53:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:49.839 ************************************ 00:27:49.839 END TEST nvmf_rpc 00:27:49.839 ************************************ 00:27:49.839 00:53:53 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:27:49.839 00:53:53 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:49.839 00:53:53 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:49.839 00:53:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:49.839 ************************************ 00:27:49.839 START TEST nvmf_invalid 00:27:49.839 ************************************ 00:27:49.839 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:27:50.099 * Looking for test storage... 
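For reference, the connect/verify/disconnect cycle repeated throughout the nvmf_rpc trace above boils down to the following sketch (the 15-iteration, 2-second poll mirrors the waitforserial/waitforserial_disconnect helpers; this standalone form is illustrative only):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  for i in $(seq 1 15); do
      # wait until a block device carrying the subsystem serial shows up
      lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME && break
      sleep 2
  done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1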
00:27:50.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.099 
00:53:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.099 00:53:53 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:50.099 Cannot find device "nvmf_tgt_br" 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:50.099 Cannot find device "nvmf_tgt_br2" 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:50.099 Cannot find device "nvmf_tgt_br" 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:50.099 Cannot find device "nvmf_tgt_br2" 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:50.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:50.099 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:50.099 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:50.377 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:50.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:27:50.378 00:27:50.378 --- 10.0.0.2 ping statistics --- 00:27:50.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.378 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:50.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:50.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:27:50.378 00:27:50.378 --- 10.0.0.3 ping statistics --- 00:27:50.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.378 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:50.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:50.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:27:50.378 00:27:50.378 --- 10.0.0.1 ping statistics --- 00:27:50.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.378 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=83700 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 83700 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 83700 ']' 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:50.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:50.378 00:53:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:27:50.378 [2024-05-15 00:53:53.627233] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
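(Condensing the nvmf_veth_init portion of the trace above: the test builds a small veth-plus-bridge topology, moves the target-side interfaces into a network namespace, and then verifies reachability with the pings whose output appears here. The recap below uses the interface names and addresses from the log; it is a sketch only, and the second target interface, the FORWARD rule, and cleanup are omitted.)

    # Condensed recap of the topology built by nvmf_veth_init in the trace above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host-side reachability check, as in the log

With the namespace in place, nvmf_tgt is started through ip netns exec nvmf_tgt_ns_spdk (the nvmfappstart line above), so the initiator-side tools have to cross the bridge to reach the target addresses on port 4420.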
00:27:50.378 [2024-05-15 00:53:53.627341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.635 [2024-05-15 00:53:53.765461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:50.635 [2024-05-15 00:53:53.889092] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.636 [2024-05-15 00:53:53.889190] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.636 [2024-05-15 00:53:53.889205] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.636 [2024-05-15 00:53:53.889216] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.636 [2024-05-15 00:53:53.889226] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:50.636 [2024-05-15 00:53:53.889359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.636 [2024-05-15 00:53:53.889504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.636 [2024-05-15 00:53:53.890066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:50.636 [2024-05-15 00:53:53.890132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.621 00:53:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:51.621 00:53:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:27:51.621 00:53:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:51.621 00:53:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:51.621 00:53:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:27:51.621 00:53:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.621 00:53:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:51.621 00:53:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18437 00:27:51.879 [2024-05-15 00:53:55.058418] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:27:51.879 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/05/15 00:53:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18437 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:27:51.879 request: 00:27:51.879 { 00:27:51.879 "method": "nvmf_create_subsystem", 00:27:51.879 "params": { 00:27:51.879 "nqn": "nqn.2016-06.io.spdk:cnode18437", 00:27:51.879 "tgt_name": "foobar" 00:27:51.879 } 00:27:51.879 } 00:27:51.879 Got JSON-RPC error response 00:27:51.879 GoRPCClient: error on JSON-RPC call' 00:27:51.879 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/05/15 00:53:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18437 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:27:51.879 request: 00:27:51.879 { 
00:27:51.879 "method": "nvmf_create_subsystem", 00:27:51.879 "params": { 00:27:51.879 "nqn": "nqn.2016-06.io.spdk:cnode18437", 00:27:51.879 "tgt_name": "foobar" 00:27:51.879 } 00:27:51.879 } 00:27:51.879 Got JSON-RPC error response 00:27:51.879 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:27:51.879 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:27:51.879 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7350 00:27:52.137 [2024-05-15 00:53:55.423029] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7350: invalid serial number 'SPDKISFASTANDAWESOME' 00:27:52.395 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/05/15 00:53:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7350 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:27:52.395 request: 00:27:52.395 { 00:27:52.395 "method": "nvmf_create_subsystem", 00:27:52.395 "params": { 00:27:52.395 "nqn": "nqn.2016-06.io.spdk:cnode7350", 00:27:52.395 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:27:52.395 } 00:27:52.395 } 00:27:52.395 Got JSON-RPC error response 00:27:52.395 GoRPCClient: error on JSON-RPC call' 00:27:52.395 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/05/15 00:53:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7350 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:27:52.395 request: 00:27:52.395 { 00:27:52.395 "method": "nvmf_create_subsystem", 00:27:52.395 "params": { 00:27:52.395 "nqn": "nqn.2016-06.io.spdk:cnode7350", 00:27:52.395 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:27:52.395 } 00:27:52.395 } 00:27:52.395 Got JSON-RPC error response 00:27:52.395 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:27:52.395 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:27:52.395 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10820 00:27:52.653 [2024-05-15 00:53:55.719482] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10820: invalid model number 'SPDK_Controller' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/05/15 00:53:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode10820], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:27:52.653 request: 00:27:52.653 { 00:27:52.653 "method": "nvmf_create_subsystem", 00:27:52.653 "params": { 00:27:52.653 "nqn": "nqn.2016-06.io.spdk:cnode10820", 00:27:52.653 "model_number": "SPDK_Controller\u001f" 00:27:52.653 } 00:27:52.653 } 00:27:52.653 Got JSON-RPC error response 00:27:52.653 GoRPCClient: error on JSON-RPC call' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/05/15 00:53:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode10820], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:27:52.653 request: 00:27:52.653 { 00:27:52.653 "method": "nvmf_create_subsystem", 00:27:52.653 "params": { 00:27:52.653 "nqn": "nqn.2016-06.io.spdk:cnode10820", 00:27:52.653 "model_number": "SPDK_Controller\u001f" 00:27:52.653 } 00:27:52.653 } 00:27:52.653 Got JSON-RPC error response 00:27:52.653 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:27:52.653 00:53:55 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:27:52.653 00:53:55 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.653 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.654 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:27:52.654 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:27:52.654 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:27:52.654 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.654 00:53:55 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.654 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:27:52.654 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ';oADj:Km&`#@4 Bq&a]1U' 00:27:52.654 00:53:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s ';oADj:Km&`#@4 Bq&a]1U' nqn.2016-06.io.spdk:cnode18129 00:27:52.912 [2024-05-15 00:53:56.144073] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18129: invalid serial number ';oADj:Km&`#@4 Bq&a]1U' 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/05/15 00:53:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18129 serial_number:;oADj:Km&`#@4 Bq&a]1U], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN ;oADj:Km&`#@4 Bq&a]1U 00:27:52.912 request: 00:27:52.912 { 00:27:52.912 "method": "nvmf_create_subsystem", 00:27:52.912 "params": { 00:27:52.912 "nqn": "nqn.2016-06.io.spdk:cnode18129", 00:27:52.912 "serial_number": ";oADj:Km&`#@4 Bq&a]1U" 00:27:52.912 } 00:27:52.912 } 00:27:52.912 Got JSON-RPC error response 00:27:52.912 GoRPCClient: error on JSON-RPC call' 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/05/15 00:53:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18129 serial_number:;oADj:Km&`#@4 Bq&a]1U], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN ;oADj:Km&`#@4 Bq&a]1U 00:27:52.912 request: 00:27:52.912 { 00:27:52.912 "method": "nvmf_create_subsystem", 00:27:52.912 "params": { 00:27:52.912 "nqn": "nqn.2016-06.io.spdk:cnode18129", 00:27:52.912 "serial_number": ";oADj:Km&`#@4 Bq&a]1U" 00:27:52.912 } 00:27:52.912 } 00:27:52.912 Got JSON-RPC error response 00:27:52.912 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:52.912 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:27:53.169 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:27:53.169 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:27:53.169 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.169 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 
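(The character-by-character xtrace resumes right after this aside.) What is being traced here is gen_random_s assembling a 41-character string one character at a time: pick a code point from the chars array, render it with printf %x and echo -e, and append it. Condensed, the helper behaves roughly like the sketch below; the random-index expression is an assumption, since only the per-character printf/echo/append steps are visible in the trace.

    # Rough sketch of the gen_random_s helper being traced here (assumption:
    # details may differ from the real target/invalid.sh implementation).
    gen_random_s() {
        local length=$1 ll string=""
        # Printable ASCII, the same 32..127 range as the chars=(...) array above.
        local chars=($(seq 32 127))
        for ((ll = 0; ll < length; ll++)); do
            # Pick one code point at random and append it as a character.
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

Because the script sets RANDOM=0 earlier (target/invalid.sh@16 in the trace), bash's generator is re-seeded and the strings, such as the ';oADj:Km&`#@4 Bq&a]1U' serial number above, come out the same on every run. The 41-character result is then passed to nvmf_create_subsystem as a deliberately invalid model number.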
00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ Z == \- ]] 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'Zr>4'\''AiZ.7"2ZN-+}|4l+V6mNx>Utb9kXw:nOx3q/' 00:27:53.170 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Zr>4'\''AiZ.7"2ZN-+}|4l+V6mNx>Utb9kXw:nOx3q/' nqn.2016-06.io.spdk:cnode28516 00:27:53.428 [2024-05-15 00:53:56.652814] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28516: invalid model number 'Zr>4'AiZ.7"2ZN-+}|4l+V6mNx>Utb9kXw:nOx3q/' 00:27:53.428 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/05/15 00:53:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:Zr>4'\''AiZ.7"2ZN-+}|4l+V6mNx>Utb9kXw:nOx3q/ nqn:nqn.2016-06.io.spdk:cnode28516], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN Zr>4'\''AiZ.7"2ZN-+}|4l+V6mNx>Utb9kXw:nOx3q/ 00:27:53.428 request: 00:27:53.428 { 00:27:53.428 "method": "nvmf_create_subsystem", 00:27:53.428 "params": { 00:27:53.428 "nqn": "nqn.2016-06.io.spdk:cnode28516", 00:27:53.428 "model_number": "Zr>4'\''AiZ.7\"2ZN-+}|4l+V6mNx>Utb9kXw:nOx3q/" 00:27:53.428 } 00:27:53.428 } 00:27:53.428 Got JSON-RPC error response 00:27:53.428 GoRPCClient: error on JSON-RPC call' 00:27:53.428 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/05/15 00:53:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:Zr>4'AiZ.7"2ZN-+}|4l+V6mNx>Utb9kXw:nOx3q/ nqn:nqn.2016-06.io.spdk:cnode28516], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN Zr>4'AiZ.7"2ZN-+}|4l+V6mNx>Utb9kXw:nOx3q/ 00:27:53.428 request: 00:27:53.428 { 00:27:53.428 "method": "nvmf_create_subsystem", 00:27:53.428 "params": { 00:27:53.428 "nqn": "nqn.2016-06.io.spdk:cnode28516", 00:27:53.428 "model_number": "Zr>4'AiZ.7\"2ZN-+}|4l+V6mNx>Utb9kXw:nOx3q/" 00:27:53.428 } 00:27:53.428 } 00:27:53.428 Got JSON-RPC error response 00:27:53.428 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:27:53.428 00:53:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:27:53.994 [2024-05-15 00:53:57.057501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.994 00:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:27:54.252 00:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:27:54.252 00:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:27:54.252 00:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:27:54.252 00:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:27:54.252 00:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:27:54.511 [2024-05-15 00:53:57.616782] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:54.511 [2024-05-15 00:53:57.616985] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:27:54.511 00:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/05/15 00:53:57 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] 
nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:27:54.511 request: 00:27:54.511 { 00:27:54.511 "method": "nvmf_subsystem_remove_listener", 00:27:54.511 "params": { 00:27:54.511 "nqn": "nqn.2016-06.io.spdk:cnode", 00:27:54.511 "listen_address": { 00:27:54.511 "trtype": "tcp", 00:27:54.511 "traddr": "", 00:27:54.511 "trsvcid": "4421" 00:27:54.511 } 00:27:54.511 } 00:27:54.511 } 00:27:54.511 Got JSON-RPC error response 00:27:54.511 GoRPCClient: error on JSON-RPC call' 00:27:54.511 00:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/05/15 00:53:57 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:27:54.511 request: 00:27:54.511 { 00:27:54.511 "method": "nvmf_subsystem_remove_listener", 00:27:54.511 "params": { 00:27:54.511 "nqn": "nqn.2016-06.io.spdk:cnode", 00:27:54.511 "listen_address": { 00:27:54.511 "trtype": "tcp", 00:27:54.511 "traddr": "", 00:27:54.511 "trsvcid": "4421" 00:27:54.511 } 00:27:54.511 } 00:27:54.511 } 00:27:54.511 Got JSON-RPC error response 00:27:54.511 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:27:54.511 00:53:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9801 -i 0 00:27:54.769 [2024-05-15 00:53:58.025397] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9801: invalid cntlid range [0-65519] 00:27:54.769 00:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/05/15 00:53:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9801], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:27:54.769 request: 00:27:54.769 { 00:27:54.769 "method": "nvmf_create_subsystem", 00:27:54.769 "params": { 00:27:54.769 "nqn": "nqn.2016-06.io.spdk:cnode9801", 00:27:54.769 "min_cntlid": 0 00:27:54.769 } 00:27:54.769 } 00:27:54.769 Got JSON-RPC error response 00:27:54.769 GoRPCClient: error on JSON-RPC call' 00:27:54.769 00:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/05/15 00:53:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode9801], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:27:54.769 request: 00:27:54.769 { 00:27:54.769 "method": "nvmf_create_subsystem", 00:27:54.769 "params": { 00:27:54.769 "nqn": "nqn.2016-06.io.spdk:cnode9801", 00:27:54.769 "min_cntlid": 0 00:27:54.769 } 00:27:54.769 } 00:27:54.769 Got JSON-RPC error response 00:27:54.769 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:27:55.026 00:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29568 -i 65520 00:27:55.284 [2024-05-15 00:53:58.442048] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29568: invalid cntlid range [65520-65519] 00:27:55.284 00:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/05/15 00:53:58 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29568], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:27:55.284 request: 00:27:55.284 { 00:27:55.284 "method": "nvmf_create_subsystem", 00:27:55.284 "params": { 00:27:55.284 "nqn": "nqn.2016-06.io.spdk:cnode29568", 00:27:55.284 "min_cntlid": 65520 00:27:55.284 } 00:27:55.284 } 00:27:55.284 Got JSON-RPC error response 00:27:55.284 GoRPCClient: error on JSON-RPC call' 00:27:55.284 00:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/05/15 00:53:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29568], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:27:55.284 request: 00:27:55.284 { 00:27:55.284 "method": "nvmf_create_subsystem", 00:27:55.284 "params": { 00:27:55.284 "nqn": "nqn.2016-06.io.spdk:cnode29568", 00:27:55.284 "min_cntlid": 65520 00:27:55.284 } 00:27:55.284 } 00:27:55.284 Got JSON-RPC error response 00:27:55.284 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:27:55.284 00:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19837 -I 0 00:27:55.541 [2024-05-15 00:53:58.742386] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19837: invalid cntlid range [1-0] 00:27:55.541 00:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/05/15 00:53:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode19837], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:27:55.541 request: 00:27:55.541 { 00:27:55.541 "method": "nvmf_create_subsystem", 00:27:55.541 "params": { 00:27:55.541 "nqn": "nqn.2016-06.io.spdk:cnode19837", 00:27:55.541 "max_cntlid": 0 00:27:55.541 } 00:27:55.541 } 00:27:55.541 Got JSON-RPC error response 00:27:55.541 GoRPCClient: error on JSON-RPC call' 00:27:55.541 00:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/05/15 00:53:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode19837], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:27:55.541 request: 00:27:55.541 { 00:27:55.541 "method": "nvmf_create_subsystem", 00:27:55.541 "params": { 00:27:55.541 "nqn": "nqn.2016-06.io.spdk:cnode19837", 00:27:55.541 "max_cntlid": 0 00:27:55.541 } 00:27:55.541 } 00:27:55.541 Got JSON-RPC error response 00:27:55.541 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:27:55.541 00:53:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10710 -I 65520 00:27:55.800 [2024-05-15 00:53:58.990759] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10710: invalid cntlid range [1-65520] 00:27:55.800 00:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/05/15 00:53:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode10710], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:27:55.800 request: 00:27:55.800 { 00:27:55.800 
"method": "nvmf_create_subsystem", 00:27:55.800 "params": { 00:27:55.800 "nqn": "nqn.2016-06.io.spdk:cnode10710", 00:27:55.800 "max_cntlid": 65520 00:27:55.800 } 00:27:55.800 } 00:27:55.800 Got JSON-RPC error response 00:27:55.800 GoRPCClient: error on JSON-RPC call' 00:27:55.800 00:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/05/15 00:53:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode10710], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:27:55.800 request: 00:27:55.800 { 00:27:55.800 "method": "nvmf_create_subsystem", 00:27:55.800 "params": { 00:27:55.800 "nqn": "nqn.2016-06.io.spdk:cnode10710", 00:27:55.800 "max_cntlid": 65520 00:27:55.800 } 00:27:55.800 } 00:27:55.800 Got JSON-RPC error response 00:27:55.800 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:27:55.800 00:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16828 -i 6 -I 5 00:27:56.058 [2024-05-15 00:53:59.231190] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16828: invalid cntlid range [6-5] 00:27:56.058 00:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/05/15 00:53:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode16828], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:27:56.058 request: 00:27:56.058 { 00:27:56.058 "method": "nvmf_create_subsystem", 00:27:56.058 "params": { 00:27:56.058 "nqn": "nqn.2016-06.io.spdk:cnode16828", 00:27:56.058 "min_cntlid": 6, 00:27:56.058 "max_cntlid": 5 00:27:56.058 } 00:27:56.058 } 00:27:56.058 Got JSON-RPC error response 00:27:56.058 GoRPCClient: error on JSON-RPC call' 00:27:56.059 00:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/05/15 00:53:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode16828], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:27:56.059 request: 00:27:56.059 { 00:27:56.059 "method": "nvmf_create_subsystem", 00:27:56.059 "params": { 00:27:56.059 "nqn": "nqn.2016-06.io.spdk:cnode16828", 00:27:56.059 "min_cntlid": 6, 00:27:56.059 "max_cntlid": 5 00:27:56.059 } 00:27:56.059 } 00:27:56.059 Got JSON-RPC error response 00:27:56.059 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:27:56.059 00:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:27:56.317 00:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:27:56.317 { 00:27:56.317 "name": "foobar", 00:27:56.317 "method": "nvmf_delete_target", 00:27:56.317 "req_id": 1 00:27:56.317 } 00:27:56.317 Got JSON-RPC error response 00:27:56.317 response: 00:27:56.317 { 00:27:56.317 "code": -32602, 00:27:56.318 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:27:56.318 }' 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:27:56.318 { 00:27:56.318 "name": "foobar", 00:27:56.318 "method": "nvmf_delete_target", 00:27:56.318 "req_id": 1 00:27:56.318 } 00:27:56.318 Got JSON-RPC error response 00:27:56.318 response: 00:27:56.318 { 00:27:56.318 "code": -32602, 00:27:56.318 "message": "The specified target doesn't exist, cannot delete it." 00:27:56.318 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:56.318 rmmod nvme_tcp 00:27:56.318 rmmod nvme_fabrics 00:27:56.318 rmmod nvme_keyring 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 83700 ']' 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 83700 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' -z 83700 ']' 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # kill -0 83700 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # uname 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83700 00:27:56.318 killing process with pid 83700 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83700' 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # kill 83700 00:27:56.318 [2024-05-15 00:53:59.500728] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:56.318 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@971 -- # wait 83700 00:27:56.576 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:56.576 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:56.576 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:56.576 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.576 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:56.576 
00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.576 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.576 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.576 00:53:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:56.576 00:27:56.576 real 0m6.723s 00:27:56.576 user 0m27.430s 00:27:56.576 sys 0m1.412s 00:27:56.576 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:56.576 00:53:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:27:56.576 ************************************ 00:27:56.576 END TEST nvmf_invalid 00:27:56.576 ************************************ 00:27:56.836 00:53:59 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:27:56.836 00:53:59 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:56.836 00:53:59 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:56.836 00:53:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:56.836 ************************************ 00:27:56.836 START TEST nvmf_abort 00:27:56.836 ************************************ 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:27:56.836 * Looking for test storage... 00:27:56.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
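For reference, the cntlid checks exercised in the nvmf_invalid run that ends above all follow the same pattern: call nvmf_create_subsystem through rpc.py with an out-of-range value and assert only on the "Invalid cntlid range" text in the JSON-RPC error. A minimal stand-alone sketch of that pattern, assuming the same repo path and an already running nvmf_tgt (the cnode name here is illustrative, not one from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
        # Every call is expected to fail; the test only greps the JSON-RPC error text.
        out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode-demo $args 2>&1) || true
        [[ "$out" == *"Invalid cntlid range"* ]] && echo "rejected as expected: $args"
    done

Because every call is rejected, no subsystem is actually created and no cleanup is needed.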
00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.836 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:56.837 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:56.837 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:56.837 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:56.837 00:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:56.837 Cannot find device "nvmf_tgt_br" 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:56.837 Cannot find device "nvmf_tgt_br2" 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:27:56.837 Cannot find device "nvmf_tgt_br" 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:56.837 Cannot find device "nvmf_tgt_br2" 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:56.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:56.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:27:56.837 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:57.096 00:54:00 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:57.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:27:57.096 00:27:57.096 --- 10.0.0.2 ping statistics --- 00:27:57.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.096 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:57.096 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:57.096 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:27:57.096 00:27:57.096 --- 10.0.0.3 ping statistics --- 00:27:57.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.096 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:57.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:57.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:27:57.096 00:27:57.096 --- 10.0.0.1 ping statistics --- 00:27:57.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.096 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=84212 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 84212 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 84212 ']' 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:57.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
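Condensed, the veth/namespace topology that nvmf_veth_init builds in the records above comes down to the following sketch; interface names and addresses are taken from the trace, while ordering is simplified and error handling, stale-link cleanup and the second target interface (nvmf_tgt_if2 at 10.0.0.3, added the same way) are omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side is moved into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two veth halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # host reaches the target address via the bridge

The single-packet pings recorded above are exactly this sanity check: the host must reach 10.0.0.2/10.0.0.3 and the namespace must reach 10.0.0.1 before the target is started.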
00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:57.096 00:54:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:57.096 [2024-05-15 00:54:00.376137] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:27:57.096 [2024-05-15 00:54:00.376221] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.354 [2024-05-15 00:54:00.519927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:57.354 [2024-05-15 00:54:00.620771] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.354 [2024-05-15 00:54:00.620828] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.354 [2024-05-15 00:54:00.620840] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.354 [2024-05-15 00:54:00.620848] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.354 [2024-05-15 00:54:00.620856] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:57.354 [2024-05-15 00:54:00.621117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.354 [2024-05-15 00:54:00.621446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.354 [2024-05-15 00:54:00.622926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:58.290 [2024-05-15 00:54:01.447926] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:58.290 Malloc0 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:58.290 00:54:01 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:58.290 Delay0 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:58.290 [2024-05-15 00:54:01.529034] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:58.290 [2024-05-15 00:54:01.529318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.290 00:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:58.549 [2024-05-15 00:54:01.709214] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:01.081 Initializing NVMe Controllers 00:28:01.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:01.081 controller IO queue size 128 less than required 00:28:01.082 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:01.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:01.082 Initialization complete. Launching workers. 
00:28:01.082 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32087 00:28:01.082 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32148, failed to submit 62 00:28:01.082 success 32091, unsuccess 57, failed 0 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.082 rmmod nvme_tcp 00:28:01.082 rmmod nvme_fabrics 00:28:01.082 rmmod nvme_keyring 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 84212 ']' 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 84212 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 84212 ']' 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 84212 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84212 00:28:01.082 killing process with pid 84212 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84212' 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # kill 84212 00:28:01.082 [2024-05-15 00:54:03.868330] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:01.082 00:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@971 -- # wait 84212 00:28:01.082 00:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:01.082 00:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:01.082 00:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:01.082 00:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.082 00:54:04 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:01.082 00:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.082 00:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.082 00:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.082 00:54:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:01.082 00:28:01.082 real 0m4.258s 00:28:01.082 user 0m12.247s 00:28:01.082 sys 0m1.063s 00:28:01.082 00:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:01.082 ************************************ 00:28:01.082 END TEST nvmf_abort 00:28:01.082 ************************************ 00:28:01.082 00:54:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.082 00:54:04 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:28:01.082 00:54:04 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:28:01.082 00:54:04 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:01.082 00:54:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:01.082 ************************************ 00:28:01.082 START TEST nvmf_ns_hotplug_stress 00:28:01.082 ************************************ 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:28:01.082 * Looking for test storage... 00:28:01.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.082 
00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:01.082 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:01.083 Cannot find device "nvmf_tgt_br" 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:01.083 Cannot find device "nvmf_tgt_br2" 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:01.083 Cannot find device "nvmf_tgt_br" 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:01.083 Cannot find device "nvmf_tgt_br2" 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:28:01.083 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:01.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:01.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:01.342 00:54:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:01.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:28:01.342 00:28:01.342 --- 10.0.0.2 ping statistics --- 00:28:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.342 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:01.342 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:01.342 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:28:01.342 00:28:01.342 --- 10.0.0.3 ping statistics --- 00:28:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.342 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:01.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:01.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:28:01.342 00:28:01.342 --- 10.0.0.1 ping statistics --- 00:28:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.342 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:01.342 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:01.601 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:01.601 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:01.601 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:01.601 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:01.601 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=84473 00:28:01.601 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 84473 00:28:01.601 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 84473 ']' 00:28:01.601 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:01.601 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.601 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:01.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.602 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.602 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:01.602 00:54:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:01.602 [2024-05-15 00:54:04.697459] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:28:01.602 [2024-05-15 00:54:04.697559] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.602 [2024-05-15 00:54:04.833998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:01.861 [2024-05-15 00:54:04.931276] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
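The namespace and bridge plumbing traced above (nvmf/common.sh, lines 166-207 of the trace) condenses to the sketch below; the interface and namespace names are the ones the script uses, while the grouping into loops is mine:

    # target side runs inside its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator, two for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator is 10.0.0.1, the two target-side interfaces are 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up, bridge the host-side peers, open TCP/4420
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # readiness check in both directions before the target is started
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages at the top appear to be the previous fixture being torn down (each failing command is followed by a true in the trace), so they are expected on a clean node.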
00:28:01.861 [2024-05-15 00:54:04.931710] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.861 [2024-05-15 00:54:04.931956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.861 [2024-05-15 00:54:04.932260] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.861 [2024-05-15 00:54:04.932480] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.861 [2024-05-15 00:54:04.932910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.861 [2024-05-15 00:54:04.932822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.861 [2024-05-15 00:54:04.932903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.861 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:01.861 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:28:01.861 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:01.861 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:01.861 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:01.861 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.861 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:01.861 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:02.120 [2024-05-15 00:54:05.342508] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.120 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:02.379 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.637 [2024-05-15 00:54:05.806766] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:02.637 [2024-05-15 00:54:05.807720] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.637 00:54:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:02.896 00:54:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:03.155 Malloc0 00:28:03.155 00:54:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:03.414 Delay0 00:28:03.414 00:54:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.673 00:54:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:03.932 NULL1 00:28:03.932 00:54:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:04.191 00:54:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=84595 00:28:04.191 00:54:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:04.191 00:54:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:04.191 00:54:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.595 Read completed with error (sct=0, sc=11) 00:28:05.595 00:54:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.595 00:54:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:05.595 00:54:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:05.853 true 00:28:05.853 00:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:05.853 00:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.790 00:54:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.049 00:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:07.049 00:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:07.307 true 00:28:07.307 00:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:07.307 00:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.567 00:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.826 00:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:07.826 00:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:08.084 true 00:28:08.084 00:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:08.084 00:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.342 00:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.601 00:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:08.601 00:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:08.859 true 00:28:08.859 00:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:08.859 00:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.799 00:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.057 00:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:10.057 00:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:10.057 true 00:28:10.316 00:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:10.316 00:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.574 00:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.832 00:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:10.832 00:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:10.833 true 00:28:10.833 00:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:10.833 00:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.399 00:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.658 00:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:11.658 00:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:11.916 true 00:28:11.916 00:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:11.916 00:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
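The target-side configuration earlier in this run (ns_hotplug_stress.sh lines 23-42) is a plain rpc.py sequence against the nvmf_tgt started inside the namespace with -m 0xE. Condensed, with rpc standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # two namespaces: a delay bdev layered on a 32 MiB malloc bdev, and a 1000 MiB null bdev
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # host-side load for the next 30 seconds, started in the background
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

Since the namespaces are added without an explicit -n, Delay0 comes out as nsid 1 and NULL1 as nsid 2, which is why the loop traced above keeps removing and re-adding namespace 1 while resizing NULL1.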
00:28:12.851 00:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.851 00:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:12.851 00:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:13.110 true 00:28:13.110 00:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:13.110 00:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.368 00:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:13.627 00:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:13.627 00:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:13.885 true 00:28:13.885 00:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:13.886 00:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.819 00:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.077 00:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:15.077 00:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:15.336 true 00:28:15.336 00:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:15.336 00:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.594 00:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.853 00:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:15.853 00:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:16.111 true 00:28:16.111 00:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:16.111 00:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.369 00:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.627 00:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:16.627 00:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:16.884 true 00:28:16.884 00:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:16.885 00:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.817 00:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.074 00:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:18.074 00:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:18.332 true 00:28:18.332 00:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:18.332 00:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.592 00:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.851 00:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:18.851 00:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:19.110 true 00:28:19.110 00:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:19.110 00:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.427 00:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.731 00:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:19.731 00:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:19.989 true 00:28:19.989 00:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:19.989 00:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.925 00:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.925 00:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:20.925 00:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:21.183 true 00:28:21.183 00:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:21.183 00:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.441 00:54:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.698 00:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:21.698 00:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:21.956 true 00:28:21.956 00:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:21.956 00:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.891 00:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.149 00:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:23.149 00:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:23.149 true 00:28:23.407 00:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:23.408 00:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.666 00:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.933 00:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:23.933 00:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:24.191 true 00:28:24.191 00:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:24.191 00:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.449 00:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.708 00:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:24.708 00:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:24.966 true 00:28:24.966 00:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:24.966 00:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.901 00:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.901 00:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:25.901 00:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:26.159 true 00:28:26.159 00:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:26.159 00:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.418 00:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.677 00:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:26.677 00:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:26.937 true 00:28:26.937 00:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:26.937 00:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.196 00:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.454 00:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:27.454 00:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:27.712 true 00:28:27.712 00:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:27.712 00:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.084 00:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:29.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:29.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:29.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:29.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:29.084 00:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:29.084 00:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:29.343 true 00:28:29.343 00:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:29.343 00:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.276 00:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.276 00:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:30.276 00:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:30.533 true 00:28:30.533 00:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:30.533 00:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.792 00:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.049 00:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:31.049 00:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:31.308 true 00:28:31.308 00:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:31.308 00:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.240 00:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.499 00:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:32.499 00:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:32.787 true 00:28:32.787 00:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:32.787 00:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.044 00:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.301 00:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:33.301 00:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:33.560 true 00:28:33.560 00:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:33.560 00:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.819 00:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.078 00:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:28:34.078 00:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:28:34.337 true 00:28:34.337 00:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:34.337 00:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
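All of the null_size iterations above come from the loop at lines 44-50 of ns_hotplug_stress.sh. As the trace suggests, each pass is the same four steps, repeated for as long as the perf process is still alive (a reconstruction from the xtrace, not the verbatim script; rpc is the same shorthand for scripts/rpc.py as before):

    null_size=1000
    while kill -0 "$PERF_PID"; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove nsid 1 (Delay0)
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
        (( null_size++ ))                                              # 1001, 1002, ...
        $rpc bdev_null_resize NULL1 "$null_size"                       # resize the null bdev backing nsid 2 (size in MiB)
    done

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the host side of this: reads issued by spdk_nvme_perf against the namespace fail while it is detached, which is the behaviour the stress test is exercising.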
00:28:34.337 Initializing NVMe Controllers 00:28:34.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.337 Controller IO queue size 128, less than required. 00:28:34.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.337 Controller IO queue size 128, less than required. 00:28:34.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:34.337 Initialization complete. Launching workers. 00:28:34.337 ======================================================== 00:28:34.337 Latency(us) 00:28:34.337 Device Information : IOPS MiB/s Average min max 00:28:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 474.64 0.23 119746.73 2849.75 1129525.09 00:28:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8816.93 4.31 14516.99 3373.22 637904.74 00:28:34.337 ======================================================== 00:28:34.337 Total : 9291.57 4.54 19892.45 2849.75 1129525.09 00:28:34.337 00:28:34.594 00:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.854 00:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:28:34.854 00:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:28:35.113 true 00:28:35.113 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84595 00:28:35.113 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (84595) - No such process 00:28:35.113 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 84595 00:28:35.113 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.371 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:35.630 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:35.630 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:35.630 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:35.630 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:35.630 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:35.630 null0 00:28:35.630 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:35.630 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:35.630 00:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:35.888 null1 00:28:35.888 00:54:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:35.888 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:35.888 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:36.147 null2 00:28:36.147 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:36.147 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:36.147 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:36.405 null3 00:28:36.405 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:36.405 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:36.405 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:36.664 null4 00:28:36.664 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:36.664 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:36.664 00:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:36.922 null5 00:28:36.922 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:36.922 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:36.922 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:37.179 null6 00:28:37.179 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:37.179 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:37.179 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:37.451 null7 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
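Once spdk_nvme_perf has exited (the kill -0 ... "No such process" above) and the two remaining namespaces are dropped, the test moves to its concurrent phase: eight 100 MiB null bdevs with 4096-byte blocks, each driven by its own add_remove worker. Condensed from the trace (ns_hotplug_stress.sh lines 58-66), again with rpc as shorthand for scripts/rpc.py:

    nthreads=8; pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096     # null0 .. null7
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &            # one worker per namespace id
        pids+=($!)
    done
    wait "${pids[@]}"                               # the 'wait 85638 85639 ...' seen below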
00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.451 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
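Each worker is the small add_remove helper whose xtrace (lines 14-18) is interleaved through the rest of this run; reconstructed from that trace it is essentially the following, with rpc again standing for scripts/rpc.py:

    add_remove() {                      # reconstruction from the xtrace, not the verbatim script
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the bdev under a fixed namespace id, then detach it again
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Eight of these running in parallel against the same subsystem is what produces the scrambled ordering of add_ns/remove_ns calls that follows; the point of the phase is concurrent namespace attach/detach on one subsystem rather than any particular ordering.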
00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 85638 85639 85642 85643 85645 85647 85649 85651 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.452 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:37.722 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.722 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:37.722 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:37.722 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:37.722 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:37.722 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:37.722 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:37.722 00:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:37.981 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:38.240 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:38.240 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:38.240 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:38.240 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.240 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:38.240 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:38.240 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:38.499 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:38.499 00:54:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:38.759 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:38.759 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:38.759 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:38.759 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:38.759 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:38.759 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:38.759 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:38.759 00:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:38.759 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.019 
00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.019 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:39.278 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.536 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.537 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:39.796 00:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:39.796 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:39.796 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.055 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.313 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.572 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:40.830 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.830 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.830 00:54:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:40.830 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:40.830 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.830 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.830 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:40.830 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.830 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.830 00:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:40.830 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:40.830 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:40.830 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:40.830 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:40.830 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:40.830 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:41.089 
00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.089 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:41.348 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:41.607 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:41.866 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:41.866 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.866 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.866 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:41.866 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.866 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.866 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:41.866 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.866 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.866 00:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:41.866 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:41.866 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.866 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.866 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:41.866 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.866 00:54:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.866 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:41.866 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.866 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.866 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:42.124 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.384 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:42.643 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.901 00:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.901 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:43.160 rmmod nvme_tcp 00:28:43.160 rmmod nvme_fabrics 00:28:43.160 rmmod nvme_keyring 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 84473 ']' 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 84473 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 84473 ']' 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 84473 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
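The interleaved rpc.py calls above are the hot-plug loop itself: for up to ten iterations (the (( i < 10 )) checks at ns_hotplug_stress.sh@16), namespaces 1-8, each backed by one of the null0-null7 bdevs, are attached to nqn.2016-06.io.spdk:cnode1 and then detached again while the initiator side keeps I/O running. A minimal sketch of that cycle, reconstructed only from the rpc.py invocations visible in the log -- the real target/ns_hotplug_stress.sh shuffles the ordering and the calls complete asynchronously, so treat this as an approximation of the shape, not the script itself:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
    # attach namespaces 1..8, each mapped to the matching null bdev (nsid N -> null$((N-1)))
    for n in $(seq 1 8); do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # detach them again so connected hosts see a continuous stream of hot-add/hot-remove events
    for n in $(seq 1 8); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
    (( ++i ))
done

Once the counter reaches 10 the EXIT trap is cleared and nvmftestfini tears the target back down, which is what the rmmod nvme_tcp / killprocess 84473 output around this point corresponds to.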
common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84473 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:43.160 killing process with pid 84473 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84473' 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # kill 84473 00:28:43.160 [2024-05-15 00:54:46.386027] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:43.160 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 84473 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:43.418 00:28:43.418 real 0m42.457s 00:28:43.418 user 3m26.306s 00:28:43.418 sys 0m13.260s 00:28:43.418 ************************************ 00:28:43.418 END TEST nvmf_ns_hotplug_stress 00:28:43.418 ************************************ 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:43.418 00:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.418 00:54:46 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:28:43.418 00:54:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:28:43.418 00:54:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:43.418 00:54:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.418 ************************************ 00:28:43.418 START TEST nvmf_connect_stress 00:28:43.418 ************************************ 00:28:43.418 00:54:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:28:43.678 * Looking for test storage... 
00:28:43.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.678 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:43.679 Cannot find device "nvmf_tgt_br" 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:43.679 Cannot find device "nvmf_tgt_br2" 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:43.679 Cannot find device "nvmf_tgt_br" 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:43.679 Cannot find device "nvmf_tgt_br2" 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:28:43.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:43.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:43.679 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:43.938 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:43.938 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:43.938 00:54:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:43.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:43.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:28:43.938 00:28:43.938 --- 10.0.0.2 ping statistics --- 00:28:43.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.938 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:43.938 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:43.938 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:28:43.938 00:28:43.938 --- 10.0.0.3 ping statistics --- 00:28:43.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.938 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:43.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:28:43.938 00:28:43.938 --- 10.0.0.1 ping statistics --- 00:28:43.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.938 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=86967 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 86967 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 86967 ']' 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:43.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
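The "Cannot find device" / "Cannot open network namespace" messages above are just the cleanup half of nvmf_veth_init running against a clean host; the interesting part is the topology it then builds: one veth pair for the initiator on the host side, two veth pairs whose peer ends live inside the nvmf_tgt_ns_spdk namespace, and a bridge (nvmf_br) tying the host-side ends together. Condensed from the ip/iptables calls in the log (error handling and the tear-down half omitted):

ip netns add nvmf_tgt_ns_spdk

# one veth pair per interface; the *_br ends will be enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side ends into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side ends together
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# open the NVMe/TCP port and allow traffic across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity checks mirrored by the ping statistics above
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping output confirm the bridge is forwarding before the target is even started.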
00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:43.938 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.938 [2024-05-15 00:54:47.220829] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:28:43.938 [2024-05-15 00:54:47.221001] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.197 [2024-05-15 00:54:47.364854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.197 [2024-05-15 00:54:47.452585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.197 [2024-05-15 00:54:47.452663] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.197 [2024-05-15 00:54:47.452693] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.197 [2024-05-15 00:54:47.452704] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:44.197 [2024-05-15 00:54:47.452713] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.197 [2024-05-15 00:54:47.454665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.197 [2024-05-15 00:54:47.454789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.197 [2024-05-15 00:54:47.454808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:44.456 [2024-05-15 00:54:47.632409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.456 
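The nvmfappstart -m 0xE / nvmfpid=86967 / waitforlisten 86967 lines above boil down to launching the target binary inside the namespace that now owns 10.0.0.2 and 10.0.0.3, then polling its RPC socket until it answers. A rough sketch of that shape -- the real waitforlisten helper in autotest_common.sh does more bookkeeping (timeouts, socket checks) than this, and the rpc_get_methods probe here is only one way to test readiness:

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
"${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &          # core mask 0xE -> the three "Reactor started on core 1/2/3" lines
nvmfpid=$!

# wait until the app answers on its RPC socket before sending any configuration
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done

Core 0 is deliberately left out of the target's mask; the connect-stress tool started next is pinned there with -c 0x1.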
00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:44.456 [2024-05-15 00:54:47.652358] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:44.456 [2024-05-15 00:54:47.652586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:44.456 NULL1 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=87000 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
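With the target up, connect_stress.sh provisions it over RPC (the rpc_cmd lines above) and then points the connect_stress example app at the new listener, while the kill -0 loop repeated below keeps checking that the app is still alive. A hedged rendering of the same steps as direct rpc.py calls against the default /var/tmp/spdk.sock; going through rpc.py rather than the harness's rpc_cmd wrapper is an assumption about how one would replay this by hand:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with the harness's options (-o, 8192-byte IO units).
$RPC nvmf_create_transport -t tcp -o -u 8192
# Subsystem open to any host (-a), fixed serial, at most 10 namespaces (-m 10).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ~1 GiB null bdev (1000 MiB, 512-byte blocks) backing the stress run.
$RPC bdev_null_create NULL1 1000 512

# Run the connection stress against the listener on core 0 (-c 0x1) with the
# duration given by -t 10; the harness backgrounds this and polls it with kill -0.
/home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
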
00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.456 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.457 00:54:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:45.023 00:54:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.023 00:54:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:45.023 00:54:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:45.023 00:54:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.023 00:54:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:45.281 00:54:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.281 00:54:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:45.281 00:54:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:45.281 00:54:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 
-- # xtrace_disable 00:28:45.281 00:54:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:45.551 00:54:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.551 00:54:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:45.551 00:54:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:45.551 00:54:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.551 00:54:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:45.823 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.823 00:54:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:45.823 00:54:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:45.823 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.823 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:46.080 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.080 00:54:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:46.080 00:54:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:46.080 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.080 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:46.645 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.645 00:54:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:46.645 00:54:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:46.645 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.645 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:46.903 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.903 00:54:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:46.903 00:54:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:46.903 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.903 00:54:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:47.160 00:54:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.160 00:54:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:47.160 00:54:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:47.160 00:54:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.160 00:54:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:47.418 00:54:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.418 00:54:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:47.418 00:54:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:47.418 00:54:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.418 00:54:50 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:28:47.676 00:54:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.676 00:54:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:47.676 00:54:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:47.676 00:54:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.676 00:54:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:48.242 00:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.242 00:54:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:48.242 00:54:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:48.242 00:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.242 00:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:48.500 00:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.500 00:54:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:48.500 00:54:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:48.500 00:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.500 00:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:48.758 00:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.758 00:54:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:48.758 00:54:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:48.759 00:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.759 00:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:49.017 00:54:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.017 00:54:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:49.017 00:54:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:49.017 00:54:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.017 00:54:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:49.652 00:54:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.652 00:54:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:49.652 00:54:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:49.652 00:54:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.652 00:54:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:49.652 00:54:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.652 00:54:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:49.652 00:54:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:49.652 00:54:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.652 00:54:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:50.219 00:54:53 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.219 00:54:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:50.219 00:54:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:50.219 00:54:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.219 00:54:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:50.477 00:54:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.477 00:54:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:50.477 00:54:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:50.477 00:54:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.477 00:54:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:50.747 00:54:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.747 00:54:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:50.747 00:54:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:50.747 00:54:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.747 00:54:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:51.005 00:54:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.005 00:54:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:51.005 00:54:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:51.005 00:54:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.005 00:54:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:51.264 00:54:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.264 00:54:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:51.264 00:54:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:51.264 00:54:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.264 00:54:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:51.830 00:54:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.830 00:54:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:51.830 00:54:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:51.830 00:54:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.830 00:54:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:52.090 00:54:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.090 00:54:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:52.090 00:54:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:52.090 00:54:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.090 00:54:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:52.349 00:54:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 
0 ]] 00:28:52.349 00:54:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:52.349 00:54:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:52.349 00:54:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.349 00:54:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:52.607 00:54:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.607 00:54:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:52.607 00:54:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:52.607 00:54:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.607 00:54:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:52.866 00:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.866 00:54:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:52.866 00:54:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:52.866 00:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.866 00:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:53.466 00:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.466 00:54:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:53.466 00:54:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:53.466 00:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.466 00:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:53.740 00:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.740 00:54:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:53.740 00:54:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:53.740 00:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.740 00:54:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:53.999 00:54:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.000 00:54:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:54.000 00:54:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:54.000 00:54:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.000 00:54:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:54.258 00:54:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.258 00:54:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:54.258 00:54:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:54.258 00:54:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.258 00:54:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:54.515 00:54:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.515 00:54:57 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 87000 00:28:54.515 00:54:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:28:54.515 00:54:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.516 00:54:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:54.773 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.773 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.773 00:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87000 00:28:54.773 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (87000) - No such process 00:28:54.773 00:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 87000 00:28:54.773 00:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:55.032 rmmod nvme_tcp 00:28:55.032 rmmod nvme_fabrics 00:28:55.032 rmmod nvme_keyring 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 86967 ']' 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 86967 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 86967 ']' 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 86967 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 86967 00:28:55.032 killing process with pid 86967 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 86967' 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 86967 00:28:55.032 [2024-05-15 00:54:58.192555] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:28:55.032 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@971 -- # wait 86967 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:55.291 00:28:55.291 real 0m11.754s 00:28:55.291 user 0m39.138s 00:28:55.291 sys 0m3.346s 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:55.291 00:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:28:55.291 ************************************ 00:28:55.291 END TEST nvmf_connect_stress 00:28:55.291 ************************************ 00:28:55.291 00:54:58 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:28:55.291 00:54:58 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:28:55.291 00:54:58 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:55.291 00:54:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:55.291 ************************************ 00:28:55.291 START TEST nvmf_fused_ordering 00:28:55.291 ************************************ 00:28:55.291 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:28:55.550 * Looking for test storage... 
00:28:55.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:55.550 Cannot find device "nvmf_tgt_br" 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:55.550 Cannot find device "nvmf_tgt_br2" 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:55.550 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:55.551 Cannot find device "nvmf_tgt_br" 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:55.551 Cannot find device "nvmf_tgt_br2" 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:28:55.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:55.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:55.551 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:55.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:55.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:28:55.809 00:28:55.809 --- 10.0.0.2 ping statistics --- 00:28:55.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.809 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:28:55.809 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:55.809 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:55.809 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:28:55.809 00:28:55.809 --- 10.0.0.3 ping statistics --- 00:28:55.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.810 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:55.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:55.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:28:55.810 00:28:55.810 --- 10.0.0.1 ping statistics --- 00:28:55.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.810 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:28:55.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=87323 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 87323 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 87323 ']' 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
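Everything from "ip netns add" down to the three pings above is nvmf_veth_init rebuilding the test topology for the fused_ordering run: three veth pairs, the two target-side ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to an nvmf_br bridge, and two iptables rules so NVMe/TCP traffic on port 4420 can reach the initiator interface. Condensed into one stand-alone sketch; every name and address is taken from the log, only the consolidation is mine:

ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers and open the firewall for NVMe/TCP (port 4420).
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check, as in the log: each address answers a single ping.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
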
00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:55.810 00:54:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:28:55.810 [2024-05-15 00:54:59.001633] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:28:55.810 [2024-05-15 00:54:59.001861] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.068 [2024-05-15 00:54:59.136365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.068 [2024-05-15 00:54:59.231817] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.068 [2024-05-15 00:54:59.231875] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.068 [2024-05-15 00:54:59.231903] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.068 [2024-05-15 00:54:59.231912] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.068 [2024-05-15 00:54:59.231919] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.068 [2024-05-15 00:54:59.231943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.004 00:54:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:57.004 00:54:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:28:57.004 [2024-05-15 00:55:00.051794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.004 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:28:57.004 [2024-05-15 
00:55:00.071720] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:57.005 [2024-05-15 00:55:00.071934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:28:57.005 NULL1 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.005 00:55:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:57.005 [2024-05-15 00:55:00.139690] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
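Relative to the connect_stress setup, fused_ordering.sh performs the same transport/subsystem/listener/null-bdev provisioning but additionally waits for bdev examination and attaches NULL1 as a namespace of the subsystem, which is why the output below reports "Attached to nqn.2016-06.io.spdk:cnode1" with "Namespace ID: 1 size: 1GB". A hedged sketch of those extra RPCs plus the app invocation, using the identifiers from the log; the description of the app's behaviour is an interpretation of its name and output, not taken from its source:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# After the transport, subsystem, listener and NULL1 bdev are created:
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# The example app connects over NVMe/TCP and exercises fused command ordering,
# printing a fused_ordering(N) counter as it progresses (the numbered lines below).
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
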
00:28:57.005 [2024-05-15 00:55:00.139732] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87377 ] 00:28:57.571 Attached to nqn.2016-06.io.spdk:cnode1 00:28:57.571 Namespace ID: 1 size: 1GB 00:28:57.571 fused_ordering(0) 00:28:57.571 fused_ordering(1) 00:28:57.571 fused_ordering(2) 00:28:57.571 fused_ordering(3) 00:28:57.571 fused_ordering(4) 00:28:57.571 fused_ordering(5) 00:28:57.571 fused_ordering(6) 00:28:57.571 fused_ordering(7) 00:28:57.571 fused_ordering(8) 00:28:57.571 fused_ordering(9) 00:28:57.571 fused_ordering(10) 00:28:57.571 fused_ordering(11) 00:28:57.571 fused_ordering(12) 00:28:57.571 fused_ordering(13) 00:28:57.571 fused_ordering(14) 00:28:57.571 fused_ordering(15) 00:28:57.571 fused_ordering(16) 00:28:57.571 fused_ordering(17) 00:28:57.571 fused_ordering(18) 00:28:57.571 fused_ordering(19) 00:28:57.571 fused_ordering(20) 00:28:57.571 fused_ordering(21) 00:28:57.571 fused_ordering(22) 00:28:57.571 fused_ordering(23) 00:28:57.571 fused_ordering(24) 00:28:57.571 fused_ordering(25) 00:28:57.571 fused_ordering(26) 00:28:57.571 fused_ordering(27) 00:28:57.571 fused_ordering(28) 00:28:57.571 fused_ordering(29) 00:28:57.571 fused_ordering(30) 00:28:57.571 fused_ordering(31) 00:28:57.571 fused_ordering(32) 00:28:57.571 fused_ordering(33) 00:28:57.571 fused_ordering(34) 00:28:57.571 fused_ordering(35) 00:28:57.571 fused_ordering(36) 00:28:57.571 fused_ordering(37) 00:28:57.571 fused_ordering(38) 00:28:57.571 fused_ordering(39) 00:28:57.571 fused_ordering(40) 00:28:57.571 fused_ordering(41) 00:28:57.571 fused_ordering(42) 00:28:57.571 fused_ordering(43) 00:28:57.571 fused_ordering(44) 00:28:57.571 fused_ordering(45) 00:28:57.571 fused_ordering(46) 00:28:57.571 fused_ordering(47) 00:28:57.571 fused_ordering(48) 00:28:57.571 fused_ordering(49) 00:28:57.571 fused_ordering(50) 00:28:57.571 fused_ordering(51) 00:28:57.571 fused_ordering(52) 00:28:57.571 fused_ordering(53) 00:28:57.571 fused_ordering(54) 00:28:57.571 fused_ordering(55) 00:28:57.571 fused_ordering(56) 00:28:57.571 fused_ordering(57) 00:28:57.571 fused_ordering(58) 00:28:57.571 fused_ordering(59) 00:28:57.571 fused_ordering(60) 00:28:57.571 fused_ordering(61) 00:28:57.571 fused_ordering(62) 00:28:57.571 fused_ordering(63) 00:28:57.571 fused_ordering(64) 00:28:57.571 fused_ordering(65) 00:28:57.571 fused_ordering(66) 00:28:57.571 fused_ordering(67) 00:28:57.571 fused_ordering(68) 00:28:57.571 fused_ordering(69) 00:28:57.571 fused_ordering(70) 00:28:57.571 fused_ordering(71) 00:28:57.571 fused_ordering(72) 00:28:57.571 fused_ordering(73) 00:28:57.571 fused_ordering(74) 00:28:57.571 fused_ordering(75) 00:28:57.571 fused_ordering(76) 00:28:57.571 fused_ordering(77) 00:28:57.571 fused_ordering(78) 00:28:57.571 fused_ordering(79) 00:28:57.571 fused_ordering(80) 00:28:57.571 fused_ordering(81) 00:28:57.571 fused_ordering(82) 00:28:57.571 fused_ordering(83) 00:28:57.571 fused_ordering(84) 00:28:57.571 fused_ordering(85) 00:28:57.571 fused_ordering(86) 00:28:57.571 fused_ordering(87) 00:28:57.571 fused_ordering(88) 00:28:57.571 fused_ordering(89) 00:28:57.571 fused_ordering(90) 00:28:57.571 fused_ordering(91) 00:28:57.571 fused_ordering(92) 00:28:57.571 fused_ordering(93) 00:28:57.571 fused_ordering(94) 00:28:57.571 fused_ordering(95) 00:28:57.571 fused_ordering(96) 00:28:57.571 fused_ordering(97) 00:28:57.571 fused_ordering(98) 
00:28:57.571 fused_ordering(99) 00:28:57.571 fused_ordering(100) 00:28:57.571 fused_ordering(101) 00:28:57.571 fused_ordering(102) 00:28:57.571 fused_ordering(103) 00:28:57.571 fused_ordering(104) 00:28:57.571 fused_ordering(105) 00:28:57.571 fused_ordering(106) 00:28:57.571 fused_ordering(107) 00:28:57.571 fused_ordering(108) 00:28:57.571 fused_ordering(109) 00:28:57.571 fused_ordering(110) 00:28:57.571 fused_ordering(111) 00:28:57.571 fused_ordering(112) 00:28:57.571 fused_ordering(113) 00:28:57.571 fused_ordering(114) 00:28:57.571 fused_ordering(115) 00:28:57.571 fused_ordering(116) 00:28:57.571 fused_ordering(117) 00:28:57.571 fused_ordering(118) 00:28:57.571 fused_ordering(119) 00:28:57.571 fused_ordering(120) 00:28:57.571 fused_ordering(121) 00:28:57.571 fused_ordering(122) 00:28:57.571 fused_ordering(123) 00:28:57.571 fused_ordering(124) 00:28:57.571 fused_ordering(125) 00:28:57.571 fused_ordering(126) 00:28:57.571 fused_ordering(127) 00:28:57.571 fused_ordering(128) 00:28:57.571 fused_ordering(129) 00:28:57.571 fused_ordering(130) 00:28:57.571 fused_ordering(131) 00:28:57.571 fused_ordering(132) 00:28:57.571 fused_ordering(133) 00:28:57.571 fused_ordering(134) 00:28:57.571 fused_ordering(135) 00:28:57.571 fused_ordering(136) 00:28:57.571 fused_ordering(137) 00:28:57.571 fused_ordering(138) 00:28:57.571 fused_ordering(139) 00:28:57.571 fused_ordering(140) 00:28:57.571 fused_ordering(141) 00:28:57.571 fused_ordering(142) 00:28:57.571 fused_ordering(143) 00:28:57.571 fused_ordering(144) 00:28:57.571 fused_ordering(145) 00:28:57.571 fused_ordering(146) 00:28:57.571 fused_ordering(147) 00:28:57.571 fused_ordering(148) 00:28:57.571 fused_ordering(149) 00:28:57.571 fused_ordering(150) 00:28:57.571 fused_ordering(151) 00:28:57.571 fused_ordering(152) 00:28:57.571 fused_ordering(153) 00:28:57.571 fused_ordering(154) 00:28:57.571 fused_ordering(155) 00:28:57.571 fused_ordering(156) 00:28:57.571 fused_ordering(157) 00:28:57.571 fused_ordering(158) 00:28:57.571 fused_ordering(159) 00:28:57.571 fused_ordering(160) 00:28:57.571 fused_ordering(161) 00:28:57.572 fused_ordering(162) 00:28:57.572 fused_ordering(163) 00:28:57.572 fused_ordering(164) 00:28:57.572 fused_ordering(165) 00:28:57.572 fused_ordering(166) 00:28:57.572 fused_ordering(167) 00:28:57.572 fused_ordering(168) 00:28:57.572 fused_ordering(169) 00:28:57.572 fused_ordering(170) 00:28:57.572 fused_ordering(171) 00:28:57.572 fused_ordering(172) 00:28:57.572 fused_ordering(173) 00:28:57.572 fused_ordering(174) 00:28:57.572 fused_ordering(175) 00:28:57.572 fused_ordering(176) 00:28:57.572 fused_ordering(177) 00:28:57.572 fused_ordering(178) 00:28:57.572 fused_ordering(179) 00:28:57.572 fused_ordering(180) 00:28:57.572 fused_ordering(181) 00:28:57.572 fused_ordering(182) 00:28:57.572 fused_ordering(183) 00:28:57.572 fused_ordering(184) 00:28:57.572 fused_ordering(185) 00:28:57.572 fused_ordering(186) 00:28:57.572 fused_ordering(187) 00:28:57.572 fused_ordering(188) 00:28:57.572 fused_ordering(189) 00:28:57.572 fused_ordering(190) 00:28:57.572 fused_ordering(191) 00:28:57.572 fused_ordering(192) 00:28:57.572 fused_ordering(193) 00:28:57.572 fused_ordering(194) 00:28:57.572 fused_ordering(195) 00:28:57.572 fused_ordering(196) 00:28:57.572 fused_ordering(197) 00:28:57.572 fused_ordering(198) 00:28:57.572 fused_ordering(199) 00:28:57.572 fused_ordering(200) 00:28:57.572 fused_ordering(201) 00:28:57.572 fused_ordering(202) 00:28:57.572 fused_ordering(203) 00:28:57.572 fused_ordering(204) 00:28:57.572 fused_ordering(205) 00:28:57.830 
fused_ordering(206) 00:28:57.830 fused_ordering(207) 00:28:57.830 fused_ordering(208) 00:28:57.830 fused_ordering(209) 00:28:57.830 fused_ordering(210) 00:28:57.830 fused_ordering(211) 00:28:57.830 fused_ordering(212) 00:28:57.830 fused_ordering(213) 00:28:57.830 fused_ordering(214) 00:28:57.830 fused_ordering(215) 00:28:57.830 fused_ordering(216) 00:28:57.830 fused_ordering(217) 00:28:57.830 fused_ordering(218) 00:28:57.830 fused_ordering(219) 00:28:57.830 fused_ordering(220) 00:28:57.830 fused_ordering(221) 00:28:57.830 fused_ordering(222) 00:28:57.830 fused_ordering(223) 00:28:57.830 fused_ordering(224) 00:28:57.830 fused_ordering(225) 00:28:57.830 fused_ordering(226) 00:28:57.830 fused_ordering(227) 00:28:57.830 fused_ordering(228) 00:28:57.830 fused_ordering(229) 00:28:57.830 fused_ordering(230) 00:28:57.830 fused_ordering(231) 00:28:57.830 fused_ordering(232) 00:28:57.830 fused_ordering(233) 00:28:57.830 fused_ordering(234) 00:28:57.830 fused_ordering(235) 00:28:57.830 fused_ordering(236) 00:28:57.830 fused_ordering(237) 00:28:57.830 fused_ordering(238) 00:28:57.830 fused_ordering(239) 00:28:57.830 fused_ordering(240) 00:28:57.830 fused_ordering(241) 00:28:57.830 fused_ordering(242) 00:28:57.830 fused_ordering(243) 00:28:57.830 fused_ordering(244) 00:28:57.830 fused_ordering(245) 00:28:57.830 fused_ordering(246) 00:28:57.830 fused_ordering(247) 00:28:57.830 fused_ordering(248) 00:28:57.830 fused_ordering(249) 00:28:57.830 fused_ordering(250) 00:28:57.830 fused_ordering(251) 00:28:57.830 fused_ordering(252) 00:28:57.830 fused_ordering(253) 00:28:57.830 fused_ordering(254) 00:28:57.830 fused_ordering(255) 00:28:57.830 fused_ordering(256) 00:28:57.830 fused_ordering(257) 00:28:57.830 fused_ordering(258) 00:28:57.830 fused_ordering(259) 00:28:57.830 fused_ordering(260) 00:28:57.830 fused_ordering(261) 00:28:57.830 fused_ordering(262) 00:28:57.830 fused_ordering(263) 00:28:57.830 fused_ordering(264) 00:28:57.830 fused_ordering(265) 00:28:57.830 fused_ordering(266) 00:28:57.830 fused_ordering(267) 00:28:57.830 fused_ordering(268) 00:28:57.830 fused_ordering(269) 00:28:57.830 fused_ordering(270) 00:28:57.830 fused_ordering(271) 00:28:57.830 fused_ordering(272) 00:28:57.830 fused_ordering(273) 00:28:57.830 fused_ordering(274) 00:28:57.830 fused_ordering(275) 00:28:57.830 fused_ordering(276) 00:28:57.830 fused_ordering(277) 00:28:57.830 fused_ordering(278) 00:28:57.830 fused_ordering(279) 00:28:57.830 fused_ordering(280) 00:28:57.830 fused_ordering(281) 00:28:57.830 fused_ordering(282) 00:28:57.830 fused_ordering(283) 00:28:57.830 fused_ordering(284) 00:28:57.830 fused_ordering(285) 00:28:57.830 fused_ordering(286) 00:28:57.830 fused_ordering(287) 00:28:57.830 fused_ordering(288) 00:28:57.830 fused_ordering(289) 00:28:57.830 fused_ordering(290) 00:28:57.830 fused_ordering(291) 00:28:57.830 fused_ordering(292) 00:28:57.830 fused_ordering(293) 00:28:57.830 fused_ordering(294) 00:28:57.830 fused_ordering(295) 00:28:57.830 fused_ordering(296) 00:28:57.830 fused_ordering(297) 00:28:57.830 fused_ordering(298) 00:28:57.830 fused_ordering(299) 00:28:57.830 fused_ordering(300) 00:28:57.830 fused_ordering(301) 00:28:57.830 fused_ordering(302) 00:28:57.830 fused_ordering(303) 00:28:57.830 fused_ordering(304) 00:28:57.830 fused_ordering(305) 00:28:57.830 fused_ordering(306) 00:28:57.830 fused_ordering(307) 00:28:57.830 fused_ordering(308) 00:28:57.830 fused_ordering(309) 00:28:57.830 fused_ordering(310) 00:28:57.830 fused_ordering(311) 00:28:57.830 fused_ordering(312) 00:28:57.830 fused_ordering(313) 
00:28:57.830 fused_ordering(314) 00:28:57.830 fused_ordering(315) 00:28:57.830 fused_ordering(316) 00:28:57.830 fused_ordering(317) 00:28:57.830 fused_ordering(318) 00:28:57.830 fused_ordering(319) 00:28:57.830 fused_ordering(320) 00:28:57.830 fused_ordering(321) 00:28:57.830 fused_ordering(322) 00:28:57.830 fused_ordering(323) 00:28:57.830 fused_ordering(324) 00:28:57.830 fused_ordering(325) 00:28:57.830 fused_ordering(326) 00:28:57.830 fused_ordering(327) 00:28:57.830 fused_ordering(328) 00:28:57.830 fused_ordering(329) 00:28:57.830 fused_ordering(330) 00:28:57.830 fused_ordering(331) 00:28:57.830 fused_ordering(332) 00:28:57.830 fused_ordering(333) 00:28:57.830 fused_ordering(334) 00:28:57.830 fused_ordering(335) 00:28:57.830 fused_ordering(336) 00:28:57.830 fused_ordering(337) 00:28:57.830 fused_ordering(338) 00:28:57.830 fused_ordering(339) 00:28:57.830 fused_ordering(340) 00:28:57.830 fused_ordering(341) 00:28:57.830 fused_ordering(342) 00:28:57.830 fused_ordering(343) 00:28:57.830 fused_ordering(344) 00:28:57.830 fused_ordering(345) 00:28:57.830 fused_ordering(346) 00:28:57.830 fused_ordering(347) 00:28:57.830 fused_ordering(348) 00:28:57.830 fused_ordering(349) 00:28:57.830 fused_ordering(350) 00:28:57.830 fused_ordering(351) 00:28:57.830 fused_ordering(352) 00:28:57.830 fused_ordering(353) 00:28:57.830 fused_ordering(354) 00:28:57.830 fused_ordering(355) 00:28:57.830 fused_ordering(356) 00:28:57.830 fused_ordering(357) 00:28:57.830 fused_ordering(358) 00:28:57.830 fused_ordering(359) 00:28:57.830 fused_ordering(360) 00:28:57.830 fused_ordering(361) 00:28:57.830 fused_ordering(362) 00:28:57.830 fused_ordering(363) 00:28:57.830 fused_ordering(364) 00:28:57.830 fused_ordering(365) 00:28:57.830 fused_ordering(366) 00:28:57.830 fused_ordering(367) 00:28:57.830 fused_ordering(368) 00:28:57.830 fused_ordering(369) 00:28:57.830 fused_ordering(370) 00:28:57.830 fused_ordering(371) 00:28:57.830 fused_ordering(372) 00:28:57.830 fused_ordering(373) 00:28:57.830 fused_ordering(374) 00:28:57.830 fused_ordering(375) 00:28:57.830 fused_ordering(376) 00:28:57.830 fused_ordering(377) 00:28:57.830 fused_ordering(378) 00:28:57.830 fused_ordering(379) 00:28:57.830 fused_ordering(380) 00:28:57.830 fused_ordering(381) 00:28:57.830 fused_ordering(382) 00:28:57.830 fused_ordering(383) 00:28:57.830 fused_ordering(384) 00:28:57.830 fused_ordering(385) 00:28:57.830 fused_ordering(386) 00:28:57.830 fused_ordering(387) 00:28:57.830 fused_ordering(388) 00:28:57.830 fused_ordering(389) 00:28:57.830 fused_ordering(390) 00:28:57.830 fused_ordering(391) 00:28:57.830 fused_ordering(392) 00:28:57.830 fused_ordering(393) 00:28:57.830 fused_ordering(394) 00:28:57.830 fused_ordering(395) 00:28:57.830 fused_ordering(396) 00:28:57.830 fused_ordering(397) 00:28:57.830 fused_ordering(398) 00:28:57.830 fused_ordering(399) 00:28:57.830 fused_ordering(400) 00:28:57.830 fused_ordering(401) 00:28:57.830 fused_ordering(402) 00:28:57.830 fused_ordering(403) 00:28:57.830 fused_ordering(404) 00:28:57.830 fused_ordering(405) 00:28:57.830 fused_ordering(406) 00:28:57.830 fused_ordering(407) 00:28:57.830 fused_ordering(408) 00:28:57.830 fused_ordering(409) 00:28:57.830 fused_ordering(410) 00:28:58.089 fused_ordering(411) 00:28:58.089 fused_ordering(412) 00:28:58.089 fused_ordering(413) 00:28:58.089 fused_ordering(414) 00:28:58.089 fused_ordering(415) 00:28:58.089 fused_ordering(416) 00:28:58.089 fused_ordering(417) 00:28:58.089 fused_ordering(418) 00:28:58.089 fused_ordering(419) 00:28:58.089 fused_ordering(420) 00:28:58.089 
fused_ordering(421) 00:28:58.089 fused_ordering(422) 00:28:58.089 fused_ordering(423) 00:28:58.089 fused_ordering(424) 00:28:58.089 fused_ordering(425) 00:28:58.089 fused_ordering(426) 00:28:58.089 fused_ordering(427) 00:28:58.089 fused_ordering(428) 00:28:58.089 fused_ordering(429) 00:28:58.089 fused_ordering(430) 00:28:58.089 fused_ordering(431) 00:28:58.089 fused_ordering(432) 00:28:58.089 fused_ordering(433) 00:28:58.089 fused_ordering(434) 00:28:58.089 fused_ordering(435) 00:28:58.089 fused_ordering(436) 00:28:58.089 fused_ordering(437) 00:28:58.089 fused_ordering(438) 00:28:58.089 fused_ordering(439) 00:28:58.089 fused_ordering(440) 00:28:58.089 fused_ordering(441) 00:28:58.089 fused_ordering(442) 00:28:58.089 fused_ordering(443) 00:28:58.089 fused_ordering(444) 00:28:58.089 fused_ordering(445) 00:28:58.089 fused_ordering(446) 00:28:58.089 fused_ordering(447) 00:28:58.089 fused_ordering(448) 00:28:58.089 fused_ordering(449) 00:28:58.089 fused_ordering(450) 00:28:58.089 fused_ordering(451) 00:28:58.089 fused_ordering(452) 00:28:58.089 fused_ordering(453) 00:28:58.089 fused_ordering(454) 00:28:58.089 fused_ordering(455) 00:28:58.089 fused_ordering(456) 00:28:58.089 fused_ordering(457) 00:28:58.089 fused_ordering(458) 00:28:58.089 fused_ordering(459) 00:28:58.089 fused_ordering(460) 00:28:58.089 fused_ordering(461) 00:28:58.089 fused_ordering(462) 00:28:58.089 fused_ordering(463) 00:28:58.089 fused_ordering(464) 00:28:58.089 fused_ordering(465) 00:28:58.089 fused_ordering(466) 00:28:58.089 fused_ordering(467) 00:28:58.089 fused_ordering(468) 00:28:58.089 fused_ordering(469) 00:28:58.089 fused_ordering(470) 00:28:58.089 fused_ordering(471) 00:28:58.089 fused_ordering(472) 00:28:58.089 fused_ordering(473) 00:28:58.089 fused_ordering(474) 00:28:58.089 fused_ordering(475) 00:28:58.089 fused_ordering(476) 00:28:58.089 fused_ordering(477) 00:28:58.089 fused_ordering(478) 00:28:58.089 fused_ordering(479) 00:28:58.089 fused_ordering(480) 00:28:58.089 fused_ordering(481) 00:28:58.089 fused_ordering(482) 00:28:58.089 fused_ordering(483) 00:28:58.089 fused_ordering(484) 00:28:58.089 fused_ordering(485) 00:28:58.089 fused_ordering(486) 00:28:58.089 fused_ordering(487) 00:28:58.089 fused_ordering(488) 00:28:58.089 fused_ordering(489) 00:28:58.089 fused_ordering(490) 00:28:58.089 fused_ordering(491) 00:28:58.089 fused_ordering(492) 00:28:58.089 fused_ordering(493) 00:28:58.089 fused_ordering(494) 00:28:58.089 fused_ordering(495) 00:28:58.089 fused_ordering(496) 00:28:58.089 fused_ordering(497) 00:28:58.089 fused_ordering(498) 00:28:58.089 fused_ordering(499) 00:28:58.089 fused_ordering(500) 00:28:58.089 fused_ordering(501) 00:28:58.089 fused_ordering(502) 00:28:58.089 fused_ordering(503) 00:28:58.089 fused_ordering(504) 00:28:58.089 fused_ordering(505) 00:28:58.089 fused_ordering(506) 00:28:58.089 fused_ordering(507) 00:28:58.089 fused_ordering(508) 00:28:58.089 fused_ordering(509) 00:28:58.089 fused_ordering(510) 00:28:58.089 fused_ordering(511) 00:28:58.089 fused_ordering(512) 00:28:58.089 fused_ordering(513) 00:28:58.089 fused_ordering(514) 00:28:58.089 fused_ordering(515) 00:28:58.089 fused_ordering(516) 00:28:58.089 fused_ordering(517) 00:28:58.089 fused_ordering(518) 00:28:58.089 fused_ordering(519) 00:28:58.089 fused_ordering(520) 00:28:58.089 fused_ordering(521) 00:28:58.089 fused_ordering(522) 00:28:58.089 fused_ordering(523) 00:28:58.089 fused_ordering(524) 00:28:58.089 fused_ordering(525) 00:28:58.089 fused_ordering(526) 00:28:58.089 fused_ordering(527) 00:28:58.089 fused_ordering(528) 
00:28:58.089 fused_ordering(529) 00:28:58.089 fused_ordering(530) 00:28:58.089 fused_ordering(531) 00:28:58.089 fused_ordering(532) 00:28:58.089 fused_ordering(533) 00:28:58.089 fused_ordering(534) 00:28:58.089 fused_ordering(535) 00:28:58.089 fused_ordering(536) 00:28:58.089 fused_ordering(537) 00:28:58.089 fused_ordering(538) 00:28:58.089 fused_ordering(539) 00:28:58.089 fused_ordering(540) 00:28:58.089 fused_ordering(541) 00:28:58.089 fused_ordering(542) 00:28:58.089 fused_ordering(543) 00:28:58.089 fused_ordering(544) 00:28:58.089 fused_ordering(545) 00:28:58.089 fused_ordering(546) 00:28:58.089 fused_ordering(547) 00:28:58.089 fused_ordering(548) 00:28:58.089 fused_ordering(549) 00:28:58.089 fused_ordering(550) 00:28:58.089 fused_ordering(551) 00:28:58.089 fused_ordering(552) 00:28:58.089 fused_ordering(553) 00:28:58.089 fused_ordering(554) 00:28:58.089 fused_ordering(555) 00:28:58.089 fused_ordering(556) 00:28:58.089 fused_ordering(557) 00:28:58.089 fused_ordering(558) 00:28:58.089 fused_ordering(559) 00:28:58.089 fused_ordering(560) 00:28:58.089 fused_ordering(561) 00:28:58.089 fused_ordering(562) 00:28:58.089 fused_ordering(563) 00:28:58.089 fused_ordering(564) 00:28:58.089 fused_ordering(565) 00:28:58.089 fused_ordering(566) 00:28:58.089 fused_ordering(567) 00:28:58.089 fused_ordering(568) 00:28:58.089 fused_ordering(569) 00:28:58.089 fused_ordering(570) 00:28:58.090 fused_ordering(571) 00:28:58.090 fused_ordering(572) 00:28:58.090 fused_ordering(573) 00:28:58.090 fused_ordering(574) 00:28:58.090 fused_ordering(575) 00:28:58.090 fused_ordering(576) 00:28:58.090 fused_ordering(577) 00:28:58.090 fused_ordering(578) 00:28:58.090 fused_ordering(579) 00:28:58.090 fused_ordering(580) 00:28:58.090 fused_ordering(581) 00:28:58.090 fused_ordering(582) 00:28:58.090 fused_ordering(583) 00:28:58.090 fused_ordering(584) 00:28:58.090 fused_ordering(585) 00:28:58.090 fused_ordering(586) 00:28:58.090 fused_ordering(587) 00:28:58.090 fused_ordering(588) 00:28:58.090 fused_ordering(589) 00:28:58.090 fused_ordering(590) 00:28:58.090 fused_ordering(591) 00:28:58.090 fused_ordering(592) 00:28:58.090 fused_ordering(593) 00:28:58.090 fused_ordering(594) 00:28:58.090 fused_ordering(595) 00:28:58.090 fused_ordering(596) 00:28:58.090 fused_ordering(597) 00:28:58.090 fused_ordering(598) 00:28:58.090 fused_ordering(599) 00:28:58.090 fused_ordering(600) 00:28:58.090 fused_ordering(601) 00:28:58.090 fused_ordering(602) 00:28:58.090 fused_ordering(603) 00:28:58.090 fused_ordering(604) 00:28:58.090 fused_ordering(605) 00:28:58.090 fused_ordering(606) 00:28:58.090 fused_ordering(607) 00:28:58.090 fused_ordering(608) 00:28:58.090 fused_ordering(609) 00:28:58.090 fused_ordering(610) 00:28:58.090 fused_ordering(611) 00:28:58.090 fused_ordering(612) 00:28:58.090 fused_ordering(613) 00:28:58.090 fused_ordering(614) 00:28:58.090 fused_ordering(615) 00:28:58.656 fused_ordering(616) 00:28:58.656 fused_ordering(617) 00:28:58.656 fused_ordering(618) 00:28:58.656 fused_ordering(619) 00:28:58.656 fused_ordering(620) 00:28:58.656 fused_ordering(621) 00:28:58.656 fused_ordering(622) 00:28:58.656 fused_ordering(623) 00:28:58.656 fused_ordering(624) 00:28:58.656 fused_ordering(625) 00:28:58.656 fused_ordering(626) 00:28:58.656 fused_ordering(627) 00:28:58.656 fused_ordering(628) 00:28:58.656 fused_ordering(629) 00:28:58.656 fused_ordering(630) 00:28:58.656 fused_ordering(631) 00:28:58.656 fused_ordering(632) 00:28:58.656 fused_ordering(633) 00:28:58.656 fused_ordering(634) 00:28:58.656 fused_ordering(635) 00:28:58.656 
fused_ordering(636) 00:28:58.656 fused_ordering(637) 00:28:58.656 fused_ordering(638) 00:28:58.656 fused_ordering(639) 00:28:58.656 fused_ordering(640) 00:28:58.656 fused_ordering(641) 00:28:58.656 fused_ordering(642) 00:28:58.656 fused_ordering(643) 00:28:58.656 fused_ordering(644) 00:28:58.656 fused_ordering(645) 00:28:58.656 fused_ordering(646) 00:28:58.656 fused_ordering(647) 00:28:58.656 fused_ordering(648) 00:28:58.656 fused_ordering(649) 00:28:58.656 fused_ordering(650) 00:28:58.656 fused_ordering(651) 00:28:58.656 fused_ordering(652) 00:28:58.656 fused_ordering(653) 00:28:58.656 fused_ordering(654) 00:28:58.656 fused_ordering(655) 00:28:58.656 fused_ordering(656) 00:28:58.656 fused_ordering(657) 00:28:58.656 fused_ordering(658) 00:28:58.656 fused_ordering(659) 00:28:58.656 fused_ordering(660) 00:28:58.656 fused_ordering(661) 00:28:58.656 fused_ordering(662) 00:28:58.656 fused_ordering(663) 00:28:58.656 fused_ordering(664) 00:28:58.656 fused_ordering(665) 00:28:58.656 fused_ordering(666) 00:28:58.656 fused_ordering(667) 00:28:58.656 fused_ordering(668) 00:28:58.656 fused_ordering(669) 00:28:58.656 fused_ordering(670) 00:28:58.656 fused_ordering(671) 00:28:58.656 fused_ordering(672) 00:28:58.656 fused_ordering(673) 00:28:58.656 fused_ordering(674) 00:28:58.656 fused_ordering(675) 00:28:58.656 fused_ordering(676) 00:28:58.656 fused_ordering(677) 00:28:58.656 fused_ordering(678) 00:28:58.656 fused_ordering(679) 00:28:58.656 fused_ordering(680) 00:28:58.656 fused_ordering(681) 00:28:58.656 fused_ordering(682) 00:28:58.656 fused_ordering(683) 00:28:58.656 fused_ordering(684) 00:28:58.656 fused_ordering(685) 00:28:58.656 fused_ordering(686) 00:28:58.656 fused_ordering(687) 00:28:58.656 fused_ordering(688) 00:28:58.656 fused_ordering(689) 00:28:58.656 fused_ordering(690) 00:28:58.656 fused_ordering(691) 00:28:58.656 fused_ordering(692) 00:28:58.656 fused_ordering(693) 00:28:58.656 fused_ordering(694) 00:28:58.656 fused_ordering(695) 00:28:58.656 fused_ordering(696) 00:28:58.656 fused_ordering(697) 00:28:58.656 fused_ordering(698) 00:28:58.656 fused_ordering(699) 00:28:58.656 fused_ordering(700) 00:28:58.656 fused_ordering(701) 00:28:58.656 fused_ordering(702) 00:28:58.656 fused_ordering(703) 00:28:58.656 fused_ordering(704) 00:28:58.656 fused_ordering(705) 00:28:58.656 fused_ordering(706) 00:28:58.656 fused_ordering(707) 00:28:58.656 fused_ordering(708) 00:28:58.656 fused_ordering(709) 00:28:58.656 fused_ordering(710) 00:28:58.656 fused_ordering(711) 00:28:58.656 fused_ordering(712) 00:28:58.656 fused_ordering(713) 00:28:58.656 fused_ordering(714) 00:28:58.656 fused_ordering(715) 00:28:58.656 fused_ordering(716) 00:28:58.656 fused_ordering(717) 00:28:58.656 fused_ordering(718) 00:28:58.656 fused_ordering(719) 00:28:58.656 fused_ordering(720) 00:28:58.656 fused_ordering(721) 00:28:58.656 fused_ordering(722) 00:28:58.656 fused_ordering(723) 00:28:58.656 fused_ordering(724) 00:28:58.656 fused_ordering(725) 00:28:58.656 fused_ordering(726) 00:28:58.656 fused_ordering(727) 00:28:58.656 fused_ordering(728) 00:28:58.656 fused_ordering(729) 00:28:58.656 fused_ordering(730) 00:28:58.656 fused_ordering(731) 00:28:58.656 fused_ordering(732) 00:28:58.656 fused_ordering(733) 00:28:58.656 fused_ordering(734) 00:28:58.656 fused_ordering(735) 00:28:58.656 fused_ordering(736) 00:28:58.656 fused_ordering(737) 00:28:58.656 fused_ordering(738) 00:28:58.656 fused_ordering(739) 00:28:58.656 fused_ordering(740) 00:28:58.656 fused_ordering(741) 00:28:58.656 fused_ordering(742) 00:28:58.656 fused_ordering(743) 
00:28:58.656 fused_ordering(744) 00:28:58.656 fused_ordering(745) 00:28:58.656 fused_ordering(746) 00:28:58.656 fused_ordering(747) 00:28:58.656 fused_ordering(748) 00:28:58.656 fused_ordering(749) 00:28:58.656 fused_ordering(750) 00:28:58.656 fused_ordering(751) 00:28:58.656 fused_ordering(752) 00:28:58.656 fused_ordering(753) 00:28:58.656 fused_ordering(754) 00:28:58.656 fused_ordering(755) 00:28:58.656 fused_ordering(756) 00:28:58.656 fused_ordering(757) 00:28:58.656 fused_ordering(758) 00:28:58.656 fused_ordering(759) 00:28:58.656 fused_ordering(760) 00:28:58.656 fused_ordering(761) 00:28:58.656 fused_ordering(762) 00:28:58.656 fused_ordering(763) 00:28:58.656 fused_ordering(764) 00:28:58.656 fused_ordering(765) 00:28:58.656 fused_ordering(766) 00:28:58.656 fused_ordering(767) 00:28:58.656 fused_ordering(768) 00:28:58.656 fused_ordering(769) 00:28:58.656 fused_ordering(770) 00:28:58.656 fused_ordering(771) 00:28:58.656 fused_ordering(772) 00:28:58.656 fused_ordering(773) 00:28:58.656 fused_ordering(774) 00:28:58.656 fused_ordering(775) 00:28:58.656 fused_ordering(776) 00:28:58.656 fused_ordering(777) 00:28:58.656 fused_ordering(778) 00:28:58.656 fused_ordering(779) 00:28:58.656 fused_ordering(780) 00:28:58.657 fused_ordering(781) 00:28:58.657 fused_ordering(782) 00:28:58.657 fused_ordering(783) 00:28:58.657 fused_ordering(784) 00:28:58.657 fused_ordering(785) 00:28:58.657 fused_ordering(786) 00:28:58.657 fused_ordering(787) 00:28:58.657 fused_ordering(788) 00:28:58.657 fused_ordering(789) 00:28:58.657 fused_ordering(790) 00:28:58.657 fused_ordering(791) 00:28:58.657 fused_ordering(792) 00:28:58.657 fused_ordering(793) 00:28:58.657 fused_ordering(794) 00:28:58.657 fused_ordering(795) 00:28:58.657 fused_ordering(796) 00:28:58.657 fused_ordering(797) 00:28:58.657 fused_ordering(798) 00:28:58.657 fused_ordering(799) 00:28:58.657 fused_ordering(800) 00:28:58.657 fused_ordering(801) 00:28:58.657 fused_ordering(802) 00:28:58.657 fused_ordering(803) 00:28:58.657 fused_ordering(804) 00:28:58.657 fused_ordering(805) 00:28:58.657 fused_ordering(806) 00:28:58.657 fused_ordering(807) 00:28:58.657 fused_ordering(808) 00:28:58.657 fused_ordering(809) 00:28:58.657 fused_ordering(810) 00:28:58.657 fused_ordering(811) 00:28:58.657 fused_ordering(812) 00:28:58.657 fused_ordering(813) 00:28:58.657 fused_ordering(814) 00:28:58.657 fused_ordering(815) 00:28:58.657 fused_ordering(816) 00:28:58.657 fused_ordering(817) 00:28:58.657 fused_ordering(818) 00:28:58.657 fused_ordering(819) 00:28:58.657 fused_ordering(820) 00:28:59.223 fused_ordering(821) 00:28:59.223 fused_ordering(822) 00:28:59.223 fused_ordering(823) 00:28:59.223 fused_ordering(824) 00:28:59.223 fused_ordering(825) 00:28:59.223 fused_ordering(826) 00:28:59.223 fused_ordering(827) 00:28:59.223 fused_ordering(828) 00:28:59.223 fused_ordering(829) 00:28:59.223 fused_ordering(830) 00:28:59.223 fused_ordering(831) 00:28:59.223 fused_ordering(832) 00:28:59.223 fused_ordering(833) 00:28:59.223 fused_ordering(834) 00:28:59.223 fused_ordering(835) 00:28:59.223 fused_ordering(836) 00:28:59.223 fused_ordering(837) 00:28:59.223 fused_ordering(838) 00:28:59.223 fused_ordering(839) 00:28:59.223 fused_ordering(840) 00:28:59.223 fused_ordering(841) 00:28:59.223 fused_ordering(842) 00:28:59.223 fused_ordering(843) 00:28:59.223 fused_ordering(844) 00:28:59.223 fused_ordering(845) 00:28:59.223 fused_ordering(846) 00:28:59.223 fused_ordering(847) 00:28:59.223 fused_ordering(848) 00:28:59.223 fused_ordering(849) 00:28:59.223 fused_ordering(850) 00:28:59.223 
fused_ordering(851) 00:28:59.223 fused_ordering(852) 00:28:59.223 fused_ordering(853) 00:28:59.223 fused_ordering(854) 00:28:59.223 fused_ordering(855) 00:28:59.223 fused_ordering(856) 00:28:59.223 fused_ordering(857) 00:28:59.223 fused_ordering(858) 00:28:59.223 fused_ordering(859) 00:28:59.223 fused_ordering(860) 00:28:59.223 fused_ordering(861) 00:28:59.223 fused_ordering(862) 00:28:59.223 fused_ordering(863) 00:28:59.223 fused_ordering(864) 00:28:59.223 fused_ordering(865) 00:28:59.223 fused_ordering(866) 00:28:59.223 fused_ordering(867) 00:28:59.223 fused_ordering(868) 00:28:59.223 fused_ordering(869) 00:28:59.223 fused_ordering(870) 00:28:59.223 fused_ordering(871) 00:28:59.223 fused_ordering(872) 00:28:59.223 fused_ordering(873) 00:28:59.223 fused_ordering(874) 00:28:59.223 fused_ordering(875) 00:28:59.223 fused_ordering(876) 00:28:59.223 fused_ordering(877) 00:28:59.223 fused_ordering(878) 00:28:59.223 fused_ordering(879) 00:28:59.223 fused_ordering(880) 00:28:59.223 fused_ordering(881) 00:28:59.223 fused_ordering(882) 00:28:59.223 fused_ordering(883) 00:28:59.223 fused_ordering(884) 00:28:59.223 fused_ordering(885) 00:28:59.223 fused_ordering(886) 00:28:59.223 fused_ordering(887) 00:28:59.223 fused_ordering(888) 00:28:59.223 fused_ordering(889) 00:28:59.223 fused_ordering(890) 00:28:59.223 fused_ordering(891) 00:28:59.223 fused_ordering(892) 00:28:59.223 fused_ordering(893) 00:28:59.223 fused_ordering(894) 00:28:59.223 fused_ordering(895) 00:28:59.223 fused_ordering(896) 00:28:59.223 fused_ordering(897) 00:28:59.223 fused_ordering(898) 00:28:59.223 fused_ordering(899) 00:28:59.223 fused_ordering(900) 00:28:59.223 fused_ordering(901) 00:28:59.223 fused_ordering(902) 00:28:59.223 fused_ordering(903) 00:28:59.223 fused_ordering(904) 00:28:59.223 fused_ordering(905) 00:28:59.223 fused_ordering(906) 00:28:59.223 fused_ordering(907) 00:28:59.223 fused_ordering(908) 00:28:59.223 fused_ordering(909) 00:28:59.223 fused_ordering(910) 00:28:59.223 fused_ordering(911) 00:28:59.223 fused_ordering(912) 00:28:59.223 fused_ordering(913) 00:28:59.223 fused_ordering(914) 00:28:59.223 fused_ordering(915) 00:28:59.223 fused_ordering(916) 00:28:59.223 fused_ordering(917) 00:28:59.223 fused_ordering(918) 00:28:59.223 fused_ordering(919) 00:28:59.223 fused_ordering(920) 00:28:59.223 fused_ordering(921) 00:28:59.223 fused_ordering(922) 00:28:59.223 fused_ordering(923) 00:28:59.223 fused_ordering(924) 00:28:59.223 fused_ordering(925) 00:28:59.223 fused_ordering(926) 00:28:59.223 fused_ordering(927) 00:28:59.223 fused_ordering(928) 00:28:59.223 fused_ordering(929) 00:28:59.223 fused_ordering(930) 00:28:59.223 fused_ordering(931) 00:28:59.223 fused_ordering(932) 00:28:59.223 fused_ordering(933) 00:28:59.223 fused_ordering(934) 00:28:59.223 fused_ordering(935) 00:28:59.223 fused_ordering(936) 00:28:59.223 fused_ordering(937) 00:28:59.223 fused_ordering(938) 00:28:59.223 fused_ordering(939) 00:28:59.223 fused_ordering(940) 00:28:59.223 fused_ordering(941) 00:28:59.223 fused_ordering(942) 00:28:59.223 fused_ordering(943) 00:28:59.224 fused_ordering(944) 00:28:59.224 fused_ordering(945) 00:28:59.224 fused_ordering(946) 00:28:59.224 fused_ordering(947) 00:28:59.224 fused_ordering(948) 00:28:59.224 fused_ordering(949) 00:28:59.224 fused_ordering(950) 00:28:59.224 fused_ordering(951) 00:28:59.224 fused_ordering(952) 00:28:59.224 fused_ordering(953) 00:28:59.224 fused_ordering(954) 00:28:59.224 fused_ordering(955) 00:28:59.224 fused_ordering(956) 00:28:59.224 fused_ordering(957) 00:28:59.224 fused_ordering(958) 
00:28:59.224 fused_ordering(959) 00:28:59.224 fused_ordering(960) 00:28:59.224 fused_ordering(961) 00:28:59.224 fused_ordering(962) 00:28:59.224 fused_ordering(963) 00:28:59.224 fused_ordering(964) 00:28:59.224 fused_ordering(965) 00:28:59.224 fused_ordering(966) 00:28:59.224 fused_ordering(967) 00:28:59.224 fused_ordering(968) 00:28:59.224 fused_ordering(969) 00:28:59.224 fused_ordering(970) 00:28:59.224 fused_ordering(971) 00:28:59.224 fused_ordering(972) 00:28:59.224 fused_ordering(973) 00:28:59.224 fused_ordering(974) 00:28:59.224 fused_ordering(975) 00:28:59.224 fused_ordering(976) 00:28:59.224 fused_ordering(977) 00:28:59.224 fused_ordering(978) 00:28:59.224 fused_ordering(979) 00:28:59.224 fused_ordering(980) 00:28:59.224 fused_ordering(981) 00:28:59.224 fused_ordering(982) 00:28:59.224 fused_ordering(983) 00:28:59.224 fused_ordering(984) 00:28:59.224 fused_ordering(985) 00:28:59.224 fused_ordering(986) 00:28:59.224 fused_ordering(987) 00:28:59.224 fused_ordering(988) 00:28:59.224 fused_ordering(989) 00:28:59.224 fused_ordering(990) 00:28:59.224 fused_ordering(991) 00:28:59.224 fused_ordering(992) 00:28:59.224 fused_ordering(993) 00:28:59.224 fused_ordering(994) 00:28:59.224 fused_ordering(995) 00:28:59.224 fused_ordering(996) 00:28:59.224 fused_ordering(997) 00:28:59.224 fused_ordering(998) 00:28:59.224 fused_ordering(999) 00:28:59.224 fused_ordering(1000) 00:28:59.224 fused_ordering(1001) 00:28:59.224 fused_ordering(1002) 00:28:59.224 fused_ordering(1003) 00:28:59.224 fused_ordering(1004) 00:28:59.224 fused_ordering(1005) 00:28:59.224 fused_ordering(1006) 00:28:59.224 fused_ordering(1007) 00:28:59.224 fused_ordering(1008) 00:28:59.224 fused_ordering(1009) 00:28:59.224 fused_ordering(1010) 00:28:59.224 fused_ordering(1011) 00:28:59.224 fused_ordering(1012) 00:28:59.224 fused_ordering(1013) 00:28:59.224 fused_ordering(1014) 00:28:59.224 fused_ordering(1015) 00:28:59.224 fused_ordering(1016) 00:28:59.224 fused_ordering(1017) 00:28:59.224 fused_ordering(1018) 00:28:59.224 fused_ordering(1019) 00:28:59.224 fused_ordering(1020) 00:28:59.224 fused_ordering(1021) 00:28:59.224 fused_ordering(1022) 00:28:59.224 fused_ordering(1023) 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.224 rmmod nvme_tcp 00:28:59.224 rmmod nvme_fabrics 00:28:59.224 rmmod nvme_keyring 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 87323 ']' 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 87323 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@947 -- # '[' -z 87323 ']' 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 87323 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 87323 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:59.224 killing process with pid 87323 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 87323' 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 87323 00:28:59.224 [2024-05-15 00:55:02.393527] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:59.224 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 87323 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:59.482 00:28:59.482 real 0m4.161s 00:28:59.482 user 0m5.013s 00:28:59.482 sys 0m1.386s 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:59.482 00:55:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:28:59.482 ************************************ 00:28:59.482 END TEST nvmf_fused_ordering 00:28:59.482 ************************************ 00:28:59.482 00:55:02 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:28:59.482 00:55:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:28:59.482 00:55:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:59.482 00:55:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:59.482 ************************************ 00:28:59.482 START TEST nvmf_delete_subsystem 00:28:59.482 ************************************ 00:28:59.482 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:28:59.742 * Looking for test storage... 
00:28:59.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.742 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:59.743 Cannot find device "nvmf_tgt_br" 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:59.743 Cannot find device "nvmf_tgt_br2" 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:59.743 Cannot find device "nvmf_tgt_br" 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:59.743 Cannot find device "nvmf_tgt_br2" 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:59.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:59.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:59.743 00:55:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:59.743 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:59.743 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:59.743 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:00.003 00:55:03 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:00.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:29:00.003 00:29:00.003 --- 10.0.0.2 ping statistics --- 00:29:00.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.003 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:00.003 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:00.003 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:29:00.003 00:29:00.003 --- 10.0.0.3 ping statistics --- 00:29:00.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.003 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:00.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:00.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:29:00.003 00:29:00.003 --- 10.0.0.1 ping statistics --- 00:29:00.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.003 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=87587 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 87587 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 87587 ']' 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
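For reference, the virtual topology that the three pings above have just verified can be recapped as the short sketch below. It is a hand-condensed rewrite of the ip/iptables calls already visible in the trace (the names nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_tgt_if, nvmf_tgt_if2, nvmf_br and the 10.0.0.0/24 addresses are taken verbatim from it); the authoritative logic is nvmf_veth_init() in test/nvmf/common.sh, which also handles the cleanup and error paths omitted here.

# target-side interfaces live in their own network namespace; the initiator stays in the root namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listen addresses
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and tie the host-side peers together with a bridge
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# let NVMe/TCP traffic reach port 4420 and forward freely across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that plumbing in place, 10.0.0.2 and 10.0.0.3 are reachable from the root namespace and 10.0.0.1 is reachable from inside nvmf_tgt_ns_spdk, which is exactly what the three pings confirm before the target application is started.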
00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:00.003 00:55:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:00.003 [2024-05-15 00:55:03.230343] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:00.003 [2024-05-15 00:55:03.230858] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.290 [2024-05-15 00:55:03.387471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:00.290 [2024-05-15 00:55:03.491739] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.290 [2024-05-15 00:55:03.492480] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.291 [2024-05-15 00:55:03.492823] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.291 [2024-05-15 00:55:03.493140] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.291 [2024-05-15 00:55:03.493373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.291 [2024-05-15 00:55:03.493696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.291 [2024-05-15 00:55:03.493710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.267 [2024-05-15 00:55:04.350622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.267 [2024-05-15 00:55:04.367722] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:01.267 [2024-05-15 00:55:04.368316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.267 NULL1 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.267 Delay0 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=87638 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:01.267 00:55:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:01.525 [2024-05-15 00:55:04.572147] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
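The configuration that the rpc_cmd calls above have just applied, plus the load that follows, boils down to the sketch below. It is a hand-written condensation for orientation only: it talks to the target through scripts/rpc.py instead of the harness's rpc_cmd wrapper, RPC_SOCK is a placeholder for whatever RPC socket the target was started with, and the real delete_subsystem.sh wraps every step in error checking.

RPC="scripts/rpc.py -s $RPC_SOCK"    # RPC_SOCK is a placeholder; rpc_cmd resolves the socket in the real test

# TCP transport, one subsystem, one listener on the first target address
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# a null bdev wrapped in a delay bdev backs the namespace; the large delay values
# (taken verbatim from the trace) keep plenty of I/O in flight at any given moment
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# drive a 5-second 70/30 random read/write load from the initiator side ...
build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# ... and, two seconds in, delete the subsystem out from under it
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait $perf_pid

Deleting the subsystem while the perf job still has commands queued is the point of the exercise: the target has to tear down the connected qpairs cleanly, and the initiator is expected to see its outstanding commands come back aborted rather than hang, which is what the completion errors below show.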
00:29:03.428 00:55:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:03.428 00:55:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.428 00:55:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 starting I/O failed: -6 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 [2024-05-15 00:55:06.609024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb478e0 is same with the state(5) to be set 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read 
completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Read completed with error (sct=0, sc=8) 00:29:03.428 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error 
(sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 [2024-05-15 00:55:06.610553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f40fc000c00 is same with the state(5) to be set 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 
starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 Read completed with error (sct=0, sc=8) 00:29:03.429 starting I/O failed: -6 00:29:03.429 Write completed with error (sct=0, sc=8) 00:29:03.429 [2024-05-15 00:55:06.611056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f40fc00c2f0 is same with the state(5) to be set 00:29:04.367 [2024-05-15 00:55:07.586251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb656c0 is same with the state(5) to be set 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 
00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 [2024-05-15 00:55:07.606080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb47700 is same with the state(5) to be set 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 [2024-05-15 00:55:07.608526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb47ac0 is same with the state(5) to be set 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read 
completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 [2024-05-15 00:55:07.609949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f40fc00c600 is same with the state(5) to be set 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Write completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 Read completed with error (sct=0, sc=8) 00:29:04.367 [2024-05-15 00:55:07.610541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f40fc00bfe0 is same with the state(5) to be set 00:29:04.367 Initializing NVMe Controllers 00:29:04.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.367 Controller IO queue size 128, less than required. 00:29:04.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:04.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:04.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:04.367 Initialization complete. Launching workers. 
00:29:04.367 ======================================================== 00:29:04.367 Latency(us) 00:29:04.367 Device Information : IOPS MiB/s Average min max 00:29:04.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.18 0.08 907278.76 352.33 2003338.43 00:29:04.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.20 0.08 938924.61 436.96 2003493.87 00:29:04.367 ======================================================== 00:29:04.367 Total : 342.38 0.17 923009.96 352.33 2003493.87 00:29:04.367 00:29:04.367 [2024-05-15 00:55:07.611134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb656c0 (9): Bad file descriptor 00:29:04.367 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:04.367 00:55:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.367 00:55:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:04.367 00:55:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87638 00:29:04.367 00:55:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87638 00:29:04.936 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (87638) - No such process 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 87638 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 87638 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 87638 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.936 [2024-05-15 00:55:08.138930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.936 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=87688 00:29:04.937 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:04.937 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:04.937 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87688 00:29:04.937 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:05.195 [2024-05-15 00:55:08.316673] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
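The kill -0 / sleep 0.5 pairs that follow come from a small watchdog in delete_subsystem.sh: the script simply waits for the backgrounded perf process to disappear and only fails if it lingers too long. A rough bash equivalent, assuming $perf_pid holds the perf PID and using the iteration bound shown in this pass of the log (the exact ordering inside the real loop may differ):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then        # the first pass of the test allows 30 iterations
        echo "perf ($perf_pid) still alive, giving up" >&2
        exit 1
    fi
    sleep 0.5
done

# In the first pass the script then asserts that waiting on the dead PID fails
# (it wraps `wait $perf_pid` in its NOT helper); in this second pass, where perf
# is expected to finish cleanly after its 3-second run, it simply reaps it:
wait "$perf_pid"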
00:29:05.453 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:05.453 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87688 00:29:05.453 00:55:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:06.019 00:55:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:06.019 00:55:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87688 00:29:06.019 00:55:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:06.586 00:55:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:06.586 00:55:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87688 00:29:06.586 00:55:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:07.152 00:55:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:07.152 00:55:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87688 00:29:07.152 00:55:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:07.411 00:55:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:07.411 00:55:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87688 00:29:07.411 00:55:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:07.978 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:07.978 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87688 00:29:07.978 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:08.234 Initializing NVMe Controllers 00:29:08.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:08.234 Controller IO queue size 128, less than required. 00:29:08.234 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:08.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:08.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:08.234 Initialization complete. Launching workers. 
00:29:08.234 ======================================================== 00:29:08.234 Latency(us) 00:29:08.234 Device Information : IOPS MiB/s Average min max 00:29:08.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003273.68 1000140.36 1042829.48 00:29:08.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006041.91 1000233.89 1042750.59 00:29:08.234 ======================================================== 00:29:08.234 Total : 256.00 0.12 1004657.80 1000140.36 1042829.48 00:29:08.234 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87688 00:29:08.492 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (87688) - No such process 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 87688 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:08.492 rmmod nvme_tcp 00:29:08.492 rmmod nvme_fabrics 00:29:08.492 rmmod nvme_keyring 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:08.492 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 87587 ']' 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 87587 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 87587 ']' 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 87587 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 87587 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:08.750 killing process with pid 87587 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 87587' 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 87587 00:29:08.750 [2024-05-15 00:55:11.804042] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:08.750 00:55:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 87587 00:29:08.750 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:08.750 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:08.750 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:08.750 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:08.750 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:08.750 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.750 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:08.750 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.009 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:09.009 00:29:09.009 real 0m9.347s 00:29:09.009 user 0m28.911s 00:29:09.009 sys 0m1.593s 00:29:09.009 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:09.009 ************************************ 00:29:09.009 END TEST nvmf_delete_subsystem 00:29:09.009 ************************************ 00:29:09.009 00:55:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.009 00:55:12 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:29:09.009 00:55:12 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:09.009 00:55:12 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:09.009 00:55:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:09.009 ************************************ 00:29:09.009 START TEST nvmf_ns_masking 00:29:09.009 ************************************ 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:29:09.009 * Looking for test storage... 
00:29:09.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.009 00:55:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=794b18ce-4699-46d8-8866-80cf676d07f4 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:09.010 00:55:12 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:09.010 Cannot find device "nvmf_tgt_br" 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:09.010 Cannot find device "nvmf_tgt_br2" 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:09.010 Cannot find device "nvmf_tgt_br" 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:29:09.010 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:29:09.268 Cannot find device "nvmf_tgt_br2" 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:09.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:09.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:09.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:29:09.268 00:29:09.268 --- 10.0.0.2 ping statistics --- 00:29:09.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.268 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:29:09.268 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:09.268 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:09.268 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:29:09.268 00:29:09.268 --- 10.0.0.3 ping statistics --- 00:29:09.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.268 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:09.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:09.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:29:09.526 00:29:09.526 --- 10.0.0.1 ping statistics --- 00:29:09.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.526 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=87925 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 87925 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 87925 ']' 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:09.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
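The run of ip and iptables commands above is nvmf_veth_init from test/nvmf/common.sh building the virtual test network used when NET_TYPE=virt; the earlier "Cannot find device" and "Cannot open network namespace" messages are just the function tolerantly tearing down leftovers from a previous run. The target ends up inside the nvmf_tgt_ns_spdk namespace reachable as 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace as 10.0.0.1, and all veth peers hang off the nvmf_br bridge. Condensed into a plain script (interface names and addresses exactly as in the xtrace; run as root):

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                    # initiator -> target address
ping -c 1 10.0.0.3                                    # initiator -> second target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

With connectivity confirmed by the three pings, the target application is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and the script waits for its RPC socket, which is where the log continues below.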
00:29:09.526 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.527 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:09.527 00:55:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:29:09.527 [2024-05-15 00:55:12.642128] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:09.527 [2024-05-15 00:55:12.642223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.527 [2024-05-15 00:55:12.788360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.785 [2024-05-15 00:55:12.895511] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.785 [2024-05-15 00:55:12.895588] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.785 [2024-05-15 00:55:12.895626] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.785 [2024-05-15 00:55:12.895646] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.785 [2024-05-15 00:55:12.895656] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.785 [2024-05-15 00:55:12.895825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.785 [2024-05-15 00:55:12.896493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.785 [2024-05-15 00:55:12.896666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.785 [2024-05-15 00:55:12.896673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.353 00:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:10.353 00:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:29:10.353 00:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:10.353 00:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:10.353 00:55:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:29:10.612 00:55:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.612 00:55:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:10.612 [2024-05-15 00:55:13.879053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.871 00:55:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:29:10.871 00:55:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:29:10.871 00:55:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:11.129 Malloc1 00:29:11.129 00:55:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:29:11.388 Malloc2 00:29:11.388 00:55:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:11.646 00:55:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:29:11.646 00:55:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.904 [2024-05-15 00:55:15.129423] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:11.904 [2024-05-15 00:55:15.130148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.904 00:55:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:29:11.904 00:55:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 794b18ce-4699-46d8-8866-80cf676d07f4 -a 10.0.0.2 -s 4420 -i 4 00:29:12.163 00:55:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:29:12.163 00:55:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:29:12.163 00:55:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:29:12.163 00:55:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:29:12.163 00:55:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:29:14.065 [ 0]:0x1 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:29:14.065 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:14.324 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ededc86e705b4947bdc68a30fbcb81cf 00:29:14.324 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ededc86e705b4947bdc68a30fbcb81cf != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:14.324 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:14.582 [ 0]:0x1 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ededc86e705b4947bdc68a30fbcb81cf 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ededc86e705b4947bdc68a30fbcb81cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:29:14.582 [ 1]:0x2 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4431773596c48e9a6c3af7782e6687e 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4431773596c48e9a6c3af7782e6687e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:29:14.582 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:14.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:14.841 00:55:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.100 00:55:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:29:15.361 00:55:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:29:15.361 00:55:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 794b18ce-4699-46d8-8866-80cf676d07f4 -a 10.0.0.2 -s 4420 -i 4 00:29:15.361 00:55:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:29:15.361 00:55:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:29:15.361 00:55:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:29:15.361 00:55:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:29:15.361 00:55:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:29:15.361 00:55:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:29:17.892 00:55:20 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:17.892 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:29:17.892 [ 0]:0x2 00:29:17.893 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:29:17.893 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 
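For readers following the trace, the visibility probe above boils down to two nvme-cli calls plus an NGUID comparison: a masked namespace either drops out of "nvme list-ns" or reports an all-zero NGUID. Below is a minimal standalone sketch of that check, assuming nvme-cli and jq are installed and the controller enumerated as /dev/nvme0 (the device name comes from the "nvme list-subsys" lookup earlier in the trace); it is a simplified reading of the test's ns_is_visible helper, not the helper itself.

  #!/usr/bin/env bash
  # Sketch: report whether namespace ID $1 (e.g. 0x1 or 0x2) is exposed by /dev/nvme0.
  nsid_hex=$1

  # Step 1: is the NSID present in the controller's active namespace list?
  if ! nvme list-ns /dev/nvme0 | grep -q "${nsid_hex}"; then
      echo "nsid ${nsid_hex}: not listed"
      exit 1
  fi

  # Step 2: read the namespace identifier; a masked namespace shows a zero NGUID.
  nguid=$(nvme id-ns /dev/nvme0 -n "${nsid_hex}" -o json | jq -r .nguid)
  if [[ "${nguid}" == "00000000000000000000000000000000" ]]; then
      echo "nsid ${nsid_hex}: hidden (zero NGUID)"
      exit 1
  fi
  echo "nsid ${nsid_hex}: visible, NGUID=${nguid}"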
00:29:17.893 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4431773596c48e9a6c3af7782e6687e 00:29:17.893 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4431773596c48e9a6c3af7782e6687e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:17.893 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:29:17.893 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:29:17.893 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:17.893 00:55:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:29:17.893 [ 0]:0x1 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ededc86e705b4947bdc68a30fbcb81cf 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ededc86e705b4947bdc68a30fbcb81cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:29:17.893 [ 1]:0x2 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4431773596c48e9a6c3af7782e6687e 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4431773596c48e9a6c3af7782e6687e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:17.893 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:29:18.152 [ 0]:0x2 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:29:18.152 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:18.410 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4431773596c48e9a6c3af7782e6687e 00:29:18.410 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4431773596c48e9a6c3af7782e6687e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:18.410 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:29:18.410 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:18.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:18.410 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:29:18.669 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:29:18.669 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 794b18ce-4699-46d8-8866-80cf676d07f4 -a 10.0.0.2 -s 4420 -i 4 00:29:18.669 00:55:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:29:18.669 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:29:18.669 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:29:18.669 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:29:18.669 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:29:18.669 00:55:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 
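Taken together, the RPC calls interleaved above form a short masking workflow on the target side: attach a namespace that starts hidden, then grant and revoke visibility per host NQN. A condensed sketch of that sequence follows, assuming a running nvmf target reachable through the default rpc.py socket and an existing bdev named Malloc1; it mirrors the calls visible in the trace rather than adding new ones.

  #!/usr/bin/env bash
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # A namespace added with --no-auto-visible starts out hidden from every host.
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

  # Grant nsid 1 to one specific host NQN; only that initiator will see it.
  $RPC nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

  # Revoke it again; a connected host sees the namespace disappear on its next
  # list-ns / id-ns probe, exactly as the checks above demonstrate.
  $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1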
00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:29:21.211 [ 0]:0x1 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:29:21.211 00:55:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ededc86e705b4947bdc68a30fbcb81cf 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ededc86e705b4947bdc68a30fbcb81cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:21.211 [ 1]:0x2 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4431773596c48e9a6c3af7782e6687e 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4431773596c48e9a6c3af7782e6687e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:29:21.211 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 
-o json 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:21.212 [ 0]:0x2 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4431773596c48e9a6c3af7782e6687e 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4431773596c48e9a6c3af7782e6687e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:21.212 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:29:21.471 [2024-05-15 00:55:24.713934] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:29:21.471 2024/05/15 00:55:24 
error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:29:21.471 request: 00:29:21.471 { 00:29:21.471 "method": "nvmf_ns_remove_host", 00:29:21.471 "params": { 00:29:21.471 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:21.471 "nsid": 2, 00:29:21.471 "host": "nqn.2016-06.io.spdk:host1" 00:29:21.471 } 00:29:21.471 } 00:29:21.471 Got JSON-RPC error response 00:29:21.471 GoRPCClient: error on JSON-RPC call 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:21.471 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:29:21.730 [ 0]:0x2 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f4431773596c48e9a6c3af7782e6687e 
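The rejected call above is the negative case the test is after: namespace 2 was attached without --no-auto-visible, so there is no per-host visibility list for the target to edit, and the JSON-RPC request shown in the trace comes back with the standard "Invalid parameters" error (Code=-32602). A minimal sketch of probing that expected failure from a script, using the same rpc.py invocation the trace used, might look like this:

  #!/usr/bin/env bash
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Expected-failure probe: removing host1 from auto-visible nsid 2 should be
  # rejected by the target with the -32602 Invalid parameters error seen above.
  if ! $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
      echo "nvmf_ns_remove_host rejected for nsid 2, as expected"
  fi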
00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f4431773596c48e9a6c3af7782e6687e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:21.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:21.730 00:55:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.989 00:55:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:21.989 00:55:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:29:21.989 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:21.989 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:29:21.989 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:21.989 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:29:21.989 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:21.989 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:21.989 rmmod nvme_tcp 00:29:22.248 rmmod nvme_fabrics 00:29:22.248 rmmod nvme_keyring 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 87925 ']' 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 87925 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 87925 ']' 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 87925 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 87925 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:22.248 killing process with pid 87925 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 87925' 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 87925 00:29:22.248 [2024-05-15 00:55:25.343304] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:22.248 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 87925 00:29:22.518 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:22.518 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:22.518 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:22.518 00:55:25 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:22.518 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:22.518 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.518 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.518 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.518 00:55:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:22.518 00:29:22.518 real 0m13.538s 00:29:22.518 user 0m53.945s 00:29:22.518 sys 0m2.459s 00:29:22.518 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:22.518 ************************************ 00:29:22.518 00:55:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:29:22.518 END TEST nvmf_ns_masking 00:29:22.518 ************************************ 00:29:22.518 00:55:25 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:29:22.518 00:55:25 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:29:22.518 00:55:25 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:29:22.518 00:55:25 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:22.518 00:55:25 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:22.518 00:55:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:22.518 ************************************ 00:29:22.518 START TEST nvmf_host_management 00:29:22.518 ************************************ 00:29:22.518 00:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:29:22.518 * Looking for test storage... 
00:29:22.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:22.518 00:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:22.518 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
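Before any connections are made, nvmf/common.sh derives the initiator's identity: "nvme gen-hostnqn" produces a uuid-based host NQN, the uuid portion doubles as the host ID, and both are packed into the NVME_HOST array for later nvme connect calls. The sketch below is one way to express that derivation; the commands and array contents match the trace, while the parameter-stripping step is my own reading of how the host ID is obtained.

  # Generate a uuid-based host NQN and reuse its uuid as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # strip everything up to the last ':' -> <uuid>
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

  # A later connect can then present a stable identity to the target, e.g.:
  # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"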
00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:22.796 Cannot find device "nvmf_tgt_br" 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:22.796 Cannot find device "nvmf_tgt_br2" 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:29:22.796 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:22.797 Cannot find device "nvmf_tgt_br" 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:22.797 Cannot find device "nvmf_tgt_br2" 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:22.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:22.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:22.797 00:55:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:22.797 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:22.797 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:22.797 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:22.797 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:22.797 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:22.797 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:22.797 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:22.797 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:22.797 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:23.056 00:55:26 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:23.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:29:23.056 00:29:23.056 --- 10.0.0.2 ping statistics --- 00:29:23.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.056 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:23.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:23.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:29:23.056 00:29:23.056 --- 10.0.0.3 ping statistics --- 00:29:23.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.056 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:23.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:29:23.056 00:29:23.056 --- 10.0.0.1 ping statistics --- 00:29:23.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.056 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=88483 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 88483 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 88483 ']' 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:29:23.056 00:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:23.057 00:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.057 00:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:23.057 00:55:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:23.057 [2024-05-15 00:55:26.267262] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:23.057 [2024-05-15 00:55:26.267359] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.315 [2024-05-15 00:55:26.406886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.315 [2024-05-15 00:55:26.505044] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.315 [2024-05-15 00:55:26.505099] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.315 [2024-05-15 00:55:26.505111] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.315 [2024-05-15 00:55:26.505120] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.315 [2024-05-15 00:55:26.505128] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
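The nvmf_veth_init block above builds a self-contained two-endpoint topology: the initiator stays in the root namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (with a second target interface on 10.0.0.3, omitted here), and the veth peers are enslaved to one bridge so NVMe/TCP traffic flows between them. A condensed sketch of that wiring, using the same names and addresses as the trace and assuming root privileges:

  #!/usr/bin/env bash
  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Let NVMe/TCP (port 4420) in and allow bridged forwarding, then sanity-check.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2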
00:29:23.315 [2024-05-15 00:55:26.505324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.315 [2024-05-15 00:55:26.506038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.315 [2024-05-15 00:55:26.506173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:23.315 [2024-05-15 00:55:26.506288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:24.252 [2024-05-15 00:55:27.334872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:24.252 Malloc0 00:29:24.252 [2024-05-15 00:55:27.417103] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:24.252 [2024-05-15 00:55:27.417973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=88559 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 88559 /var/tmp/bdevperf.sock 
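Once the target is up inside the namespace, the trace creates the TCP transport and then replays a batch of RPCs from rpcs.txt that the log does not echo. Judging by the Malloc0 bdev, the cnode0 subsystem referenced in the bdevperf config below, and the later nvmf_subsystem_remove_host call, a hand-written equivalent would look roughly like the following; treat it as an illustration of the batch, not a copy of the file.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192                 # as in the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0                    # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420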
00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 88559 ']' 00:29:24.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:24.252 { 00:29:24.252 "params": { 00:29:24.252 "name": "Nvme$subsystem", 00:29:24.252 "trtype": "$TEST_TRANSPORT", 00:29:24.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.252 "adrfam": "ipv4", 00:29:24.252 "trsvcid": "$NVMF_PORT", 00:29:24.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.252 "hdgst": ${hdgst:-false}, 00:29:24.252 "ddgst": ${ddgst:-false} 00:29:24.252 }, 00:29:24.252 "method": "bdev_nvme_attach_controller" 00:29:24.252 } 00:29:24.252 EOF 00:29:24.252 )") 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:29:24.252 00:55:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:24.252 "params": { 00:29:24.252 "name": "Nvme0", 00:29:24.252 "trtype": "tcp", 00:29:24.252 "traddr": "10.0.0.2", 00:29:24.252 "adrfam": "ipv4", 00:29:24.252 "trsvcid": "4420", 00:29:24.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.252 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:24.252 "hdgst": false, 00:29:24.252 "ddgst": false 00:29:24.252 }, 00:29:24.252 "method": "bdev_nvme_attach_controller" 00:29:24.252 }' 00:29:24.252 [2024-05-15 00:55:27.519856] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
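The bdevperf invocation above feeds the generated attach-controller JSON through /dev/fd/63 and then, in the next block of the trace, waits for I/O by polling bdev_get_iostat. A minimal sketch of driving the same flow by hand is shown below, assuming the JSON printed in the trace has been wrapped into a bdevperf --json config saved as ./nvme0.json (a hypothetical filename); the flags, socket path, and RPC calls are the ones in the trace.

  #!/usr/bin/env bash
  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # Run a 10-second verify workload (qd 64, 64 KiB I/O) against the attached controller.
  $BDEVPERF -r "$SOCK" --json ./nvme0.json -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!

  $RPC -s "$SOCK" framework_wait_init          # block until bdevperf finished startup

  # Poll until at least 100 reads have completed, mirroring the waitforio helper.
  for _ in {1..10}; do
      ops=$($RPC -s "$SOCK" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
      (( ops >= 100 )) && break
      sleep 1
  done
  echo "num_read_ops=${ops}"
  wait "$perfpid"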
00:29:24.252 [2024-05-15 00:55:27.519953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88559 ] 00:29:24.511 [2024-05-15 00:55:27.664398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.511 [2024-05-15 00:55:27.762241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.768 Running I/O for 10 seconds... 00:29:25.336 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:25.336 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:29:25.336 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:25.336 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.337 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.597 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:29:25.597 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:29:25.597 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:25.597 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:25.597 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:25.597 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:25.597 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.597 00:55:28 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.597 [2024-05-15 00:55:28.648476 - 00:55:28.649834] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 and READ sqid:1 cid:0 through cid:58 nsid:1 lba:122880 through lba:130304 len:128 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.598
[2024-05-15 00:55:28.649844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.598 [2024-05-15 00:55:28.649853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.598 [2024-05-15 00:55:28.649868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.598 [2024-05-15 00:55:28.649878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.598 [2024-05-15 00:55:28.649889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.598 [2024-05-15 00:55:28.649898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.598 [2024-05-15 00:55:28.649910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.598 [2024-05-15 00:55:28.649919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.598 [2024-05-15 00:55:28.649935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2640aa0 is same with the state(5) to be set 00:29:25.598 [2024-05-15 00:55:28.650007] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2640aa0 was disconnected and freed. reset controller. 00:29:25.598 [2024-05-15 00:55:28.651253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:25.598 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.598 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:25.598 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.598 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.598 task offset: 130944 on job bdev=Nvme0n1 fails 00:29:25.598 00:29:25.599 Latency(us) 00:29:25.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.599 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.599 Job: Nvme0n1 ended in about 0.70 seconds with error 00:29:25.599 Verification LBA range: start 0x0 length 0x400 00:29:25.599 Nvme0n1 : 0.70 1371.19 85.70 91.41 0.00 42695.04 6017.40 40274.85 00:29:25.599 =================================================================================================================== 00:29:25.599 Total : 1371.19 85.70 91.41 0.00 42695.04 6017.40 40274.85 00:29:25.599 [2024-05-15 00:55:28.653290] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:25.599 [2024-05-15 00:55:28.653323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222fba0 (9): Bad file descriptor 00:29:25.599 00:55:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.599 00:55:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:25.599 [2024-05-15 00:55:28.659976] 
bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:26.534 00:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 88559 00:29:26.534 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (88559) - No such process 00:29:26.534 00:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:26.534 00:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:26.535 00:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:26.535 00:55:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:26.535 00:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:29:26.535 00:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:29:26.535 00:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:26.535 00:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:26.535 { 00:29:26.535 "params": { 00:29:26.535 "name": "Nvme$subsystem", 00:29:26.535 "trtype": "$TEST_TRANSPORT", 00:29:26.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.535 "adrfam": "ipv4", 00:29:26.535 "trsvcid": "$NVMF_PORT", 00:29:26.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.535 "hdgst": ${hdgst:-false}, 00:29:26.535 "ddgst": ${ddgst:-false} 00:29:26.535 }, 00:29:26.535 "method": "bdev_nvme_attach_controller" 00:29:26.535 } 00:29:26.535 EOF 00:29:26.535 )") 00:29:26.535 00:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:29:26.535 00:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:29:26.535 00:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:29:26.535 00:55:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:26.535 "params": { 00:29:26.535 "name": "Nvme0", 00:29:26.535 "trtype": "tcp", 00:29:26.535 "traddr": "10.0.0.2", 00:29:26.535 "adrfam": "ipv4", 00:29:26.535 "trsvcid": "4420", 00:29:26.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.535 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:26.535 "hdgst": false, 00:29:26.535 "ddgst": false 00:29:26.535 }, 00:29:26.535 "method": "bdev_nvme_attach_controller" 00:29:26.535 }' 00:29:26.535 [2024-05-15 00:55:29.725901] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:26.535 [2024-05-15 00:55:29.726011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88605 ] 00:29:26.791 [2024-05-15 00:55:29.868171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.791 [2024-05-15 00:55:29.956154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.048 Running I/O for 1 seconds... 
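The bdevperf configuration assembled by gen_nvmf_target_json above is handed to bdevperf through /dev/fd/62. Written out as a standalone file it would look roughly like the sketch below; the outer "subsystems"/"bdev" wrapper is not echoed by the xtrace and is reconstructed here from the usual SPDK JSON config layout, and /tmp/bdevperf_nvme0.json is an arbitrary file name chosen only for the example.

# Sketch only: an equivalent standalone form of the traced bdevperf run,
# with the rendered config saved to a file instead of piped via /dev/fd/62.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1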
00:29:27.981 00:29:27.981 Latency(us) 00:29:27.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.981 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.981 Verification LBA range: start 0x0 length 0x400 00:29:27.981 Nvme0n1 : 1.01 1578.22 98.64 0.00 0.00 39740.35 5600.35 36938.47 00:29:27.981 =================================================================================================================== 00:29:27.981 Total : 1578.22 98.64 0.00 0.00 39740.35 5600.35 36938.47 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:28.238 rmmod nvme_tcp 00:29:28.238 rmmod nvme_fabrics 00:29:28.238 rmmod nvme_keyring 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 88483 ']' 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 88483 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 88483 ']' 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 88483 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 88483 00:29:28.238 killing process with pid 88483 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 88483' 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 88483 00:29:28.238 [2024-05-15 00:55:31.470581] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal 
in v24.09 hit 1 times 00:29:28.238 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 88483 00:29:28.496 [2024-05-15 00:55:31.677214] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:28.496 00:29:28.496 real 0m6.040s 00:29:28.496 user 0m23.532s 00:29:28.496 sys 0m1.445s 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:28.496 00:55:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.496 ************************************ 00:29:28.496 END TEST nvmf_host_management 00:29:28.496 ************************************ 00:29:28.755 00:55:31 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:29:28.755 00:55:31 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:28.755 00:55:31 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:28.755 00:55:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:28.755 ************************************ 00:29:28.755 START TEST nvmf_lvol 00:29:28.755 ************************************ 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:29:28.755 * Looking for test storage... 
00:29:28.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:28.755 00:55:31 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:28.755 Cannot find device "nvmf_tgt_br" 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:28.755 Cannot find device "nvmf_tgt_br2" 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:28.755 Cannot find device "nvmf_tgt_br" 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:28.755 Cannot find device "nvmf_tgt_br2" 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:29:28.755 00:55:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:29.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:29.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:29.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:29:29.014 00:29:29.014 --- 10.0.0.2 ping statistics --- 00:29:29.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.014 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:29.014 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:29.014 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:29:29.014 00:29:29.014 --- 10.0.0.3 ping statistics --- 00:29:29.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.014 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:29.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:29.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:29:29.014 00:29:29.014 --- 10.0.0.1 ping statistics --- 00:29:29.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.014 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=88822 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 88822 00:29:29.014 00:55:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 88822 ']' 00:29:29.273 00:55:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.273 00:55:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:29.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.273 00:55:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.273 00:55:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:29.273 00:55:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.273 [2024-05-15 00:55:32.354896] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:29.273 [2024-05-15 00:55:32.354997] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.273 [2024-05-15 00:55:32.500835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:29.532 [2024-05-15 00:55:32.623667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.532 [2024-05-15 00:55:32.624028] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
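The nvmf_veth_init xtrace above builds the network the target has just been started inside: the initiator stays in the root namespace on nvmf_init_if (10.0.0.1/24), the target runs in nvmf_tgt_ns_spdk on nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), and everything is joined through the nvmf_br bridge. Gathered from the scattered trace lines, the topology amounts to this sketch (names, addresses, and flags exactly as traced; only the consolidation is new):

# Namespace and veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addresses: initiator side in the root netns, target side inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring links up on both sides
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the bridge-side veth ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic on port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT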
00:29:29.532 [2024-05-15 00:55:32.624228] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.532 [2024-05-15 00:55:32.625092] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.532 [2024-05-15 00:55:32.625237] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.532 [2024-05-15 00:55:32.625532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.532 [2024-05-15 00:55:32.625653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.532 [2024-05-15 00:55:32.625658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.468 00:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:30.468 00:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:29:30.468 00:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:30.468 00:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:30.468 00:55:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.468 00:55:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.468 00:55:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:30.468 [2024-05-15 00:55:33.728774] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.727 00:55:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:30.985 00:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:30.985 00:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:31.243 00:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:31.243 00:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:31.501 00:55:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:31.760 00:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bc70f3da-5a4a-4951-ace9-8f55028ab412 00:29:31.760 00:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc70f3da-5a4a-4951-ace9-8f55028ab412 lvol 20 00:29:32.327 00:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7655982d-7692-473b-a3a1-1f0133ccbae9 00:29:32.327 00:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:32.327 00:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7655982d-7692-473b-a3a1-1f0133ccbae9 00:29:32.894 00:55:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:32.894 [2024-05-15 00:55:36.156573] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in 
favor of trtype to be removed in v24.09 00:29:32.894 [2024-05-15 00:55:36.156942] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.152 00:55:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:33.152 00:55:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=88970 00:29:33.152 00:55:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:33.152 00:55:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:34.527 00:55:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 7655982d-7692-473b-a3a1-1f0133ccbae9 MY_SNAPSHOT 00:29:34.527 00:55:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4143659a-9fd7-446b-a566-69f343e21518 00:29:34.527 00:55:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 7655982d-7692-473b-a3a1-1f0133ccbae9 30 00:29:35.095 00:55:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4143659a-9fd7-446b-a566-69f343e21518 MY_CLONE 00:29:35.353 00:55:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c80a030f-1d41-4d22-a4a0-1f8048b9aa62 00:29:35.353 00:55:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c80a030f-1d41-4d22-a4a0-1f8048b9aa62 00:29:35.919 00:55:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 88970 00:29:44.060 Initializing NVMe Controllers 00:29:44.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:44.060 Controller IO queue size 128, less than required. 00:29:44.060 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:44.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:44.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:44.060 Initialization complete. Launching workers. 
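Recapping the rpc.py calls traced above before the perf results print: the lvol exercise boils down to roughly the sequence below. This is a sketch only; each UUID is captured from the corresponding rpc output the same way the script does, and the size arguments 20 and 30 come from LVOL_BDEV_INIT_SIZE and LVOL_BDEV_FINAL_SIZE.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                       # -> Malloc0
$rpc bdev_malloc_create 64 512                       # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # initial lvol (size 20 per LVOL_BDEV_INIT_SIZE)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &   # I/O keeps running while the lvol is reshaped
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                     # grow the live lvol to LVOL_BDEV_FINAL_SIZE
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                      # make the clone independent of its snapshot
wait                                                 # let the perf run finish and report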
00:29:44.060 ======================================================== 00:29:44.060 Latency(us) 00:29:44.060 Device Information : IOPS MiB/s Average min max 00:29:44.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10649.74 41.60 12019.73 1759.22 68162.26 00:29:44.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10539.74 41.17 12150.23 3544.65 64092.65 00:29:44.060 ======================================================== 00:29:44.060 Total : 21189.48 82.77 12084.64 1759.22 68162.26 00:29:44.060 00:29:44.060 00:55:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:44.060 00:55:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7655982d-7692-473b-a3a1-1f0133ccbae9 00:29:44.060 00:55:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc70f3da-5a4a-4951-ace9-8f55028ab412 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:44.319 rmmod nvme_tcp 00:29:44.319 rmmod nvme_fabrics 00:29:44.319 rmmod nvme_keyring 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 88822 ']' 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 88822 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 88822 ']' 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 88822 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 88822 00:29:44.319 killing process with pid 88822 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 88822' 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 88822 00:29:44.319 [2024-05-15 00:55:47.602934] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:44.319 00:55:47 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@971 -- # wait 88822 00:29:44.577 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:44.577 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:44.577 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:44.577 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:44.577 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:44.577 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.577 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.577 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.837 00:55:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:44.837 ************************************ 00:29:44.837 END TEST nvmf_lvol 00:29:44.837 ************************************ 00:29:44.837 00:29:44.837 real 0m16.096s 00:29:44.837 user 1m7.005s 00:29:44.837 sys 0m4.045s 00:29:44.837 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:44.837 00:55:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:44.837 00:55:47 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:29:44.837 00:55:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:44.837 00:55:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:44.837 00:55:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:44.837 ************************************ 00:29:44.837 START TEST nvmf_lvs_grow 00:29:44.837 ************************************ 00:29:44.837 00:55:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:29:44.837 * Looking for test storage... 
00:29:44.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:44.837 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:44.838 Cannot find device "nvmf_tgt_br" 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:44.838 Cannot find device "nvmf_tgt_br2" 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:44.838 Cannot find device "nvmf_tgt_br" 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:29:44.838 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:45.095 Cannot find device "nvmf_tgt_br2" 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:45.095 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:45.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:45.095 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:45.096 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:45.096 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:45.096 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:45.096 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:45.096 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:45.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:45.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:29:45.354 00:29:45.354 --- 10.0.0.2 ping statistics --- 00:29:45.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.354 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:45.354 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:45.354 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:29:45.354 00:29:45.354 --- 10.0.0.3 ping statistics --- 00:29:45.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.354 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:45.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:45.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:29:45.354 00:29:45.354 --- 10.0.0.1 ping statistics --- 00:29:45.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.354 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:45.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=89340 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 89340 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 89340 ']' 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
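For orientation, the nvmf_veth_init steps traced above boil down to roughly the following topology setup; this is a condensed sketch of the commands shown in the trace, not the full common.sh logic, and it assumes a host with iproute2 and iptables available:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pairs, moved into the netns
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the host-side peers together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                             # host -> target reachability check

The target application is then started inside that namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...), which is why the listener at 10.0.0.2:4420 is reachable from the host-side initiator.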
00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:45.354 00:55:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:45.354 [2024-05-15 00:55:48.499635] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:45.354 [2024-05-15 00:55:48.499879] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.354 [2024-05-15 00:55:48.638778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.613 [2024-05-15 00:55:48.729327] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.613 [2024-05-15 00:55:48.729574] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.613 [2024-05-15 00:55:48.729751] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.613 [2024-05-15 00:55:48.729807] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.613 [2024-05-15 00:55:48.729905] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.613 [2024-05-15 00:55:48.729964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.549 00:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:46.549 00:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:29:46.549 00:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:46.549 00:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:46.549 00:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:46.549 00:55:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.549 00:55:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:46.549 [2024-05-15 00:55:49.834180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:46.808 ************************************ 00:29:46.808 START TEST lvs_grow_clean 00:29:46.808 ************************************ 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:46.808 00:55:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:29:46.808 00:55:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:47.065 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:47.065 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:47.322 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=661c906b-e24d-4dc2-95f5-f2401a8982e3 00:29:47.322 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:47.322 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:29:47.580 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:47.580 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:47.580 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 lvol 150 00:29:47.840 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0de6dd61-4a4c-4e1a-8339-7840bd473473 00:29:47.840 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:29:47.841 00:55:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:48.100 [2024-05-15 00:55:51.246420] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:48.100 [2024-05-15 00:55:51.246514] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:48.100 true 00:29:48.100 00:55:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:29:48.100 00:55:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:48.359 00:55:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:48.359 00:55:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:48.617 00:55:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0de6dd61-4a4c-4e1a-8339-7840bd473473 00:29:49.184 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:49.184 [2024-05-15 00:55:52.418909] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:49.184 [2024-05-15 00:55:52.419206] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.184 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:49.443 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:49.443 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89508 00:29:49.443 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:49.443 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89508 /var/tmp/bdevperf.sock 00:29:49.443 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 89508 ']' 00:29:49.443 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:49.443 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:49.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:49.443 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:49.443 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:49.443 00:55:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:49.703 [2024-05-15 00:55:52.730591] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
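Condensed, the per-test target setup just traced is roughly the following rpc.py sequence (rpc.py stands for scripts/rpc.py against the nvmf_tgt started above; the relative aio path and the <...> placeholders stand in for the concrete path and UUIDs reported in the trace):

  truncate -s 200M test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096      # file-backed bdev, 4 KiB blocks
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
         --md-pages-per-cluster-ratio 300 aio_bdev lvs                # prints the lvstore UUID (49 data clusters)
  rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150                      # 150 MiB lvol, prints the lvol UUID
  rpc.py nvmf_create_transport -t tcp -o -u 8192                      # done once, before the subtests
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf (started with -z and driven over /var/tmp/bdevperf.sock) then attaches to that subsystem with bdev_nvme_attach_controller and runs the 10-second randwrite workload shown below.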
00:29:49.703 [2024-05-15 00:55:52.730696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89508 ] 00:29:49.703 [2024-05-15 00:55:52.869452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.703 [2024-05-15 00:55:52.962913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.639 00:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:50.639 00:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:29:50.639 00:55:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:50.897 Nvme0n1 00:29:50.897 00:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:51.156 [ 00:29:51.156 { 00:29:51.156 "aliases": [ 00:29:51.156 "0de6dd61-4a4c-4e1a-8339-7840bd473473" 00:29:51.156 ], 00:29:51.156 "assigned_rate_limits": { 00:29:51.156 "r_mbytes_per_sec": 0, 00:29:51.156 "rw_ios_per_sec": 0, 00:29:51.156 "rw_mbytes_per_sec": 0, 00:29:51.156 "w_mbytes_per_sec": 0 00:29:51.156 }, 00:29:51.156 "block_size": 4096, 00:29:51.156 "claimed": false, 00:29:51.156 "driver_specific": { 00:29:51.156 "mp_policy": "active_passive", 00:29:51.156 "nvme": [ 00:29:51.156 { 00:29:51.156 "ctrlr_data": { 00:29:51.156 "ana_reporting": false, 00:29:51.156 "cntlid": 1, 00:29:51.156 "firmware_revision": "24.05", 00:29:51.156 "model_number": "SPDK bdev Controller", 00:29:51.156 "multi_ctrlr": true, 00:29:51.156 "oacs": { 00:29:51.156 "firmware": 0, 00:29:51.156 "format": 0, 00:29:51.156 "ns_manage": 0, 00:29:51.156 "security": 0 00:29:51.156 }, 00:29:51.156 "serial_number": "SPDK0", 00:29:51.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:51.156 "vendor_id": "0x8086" 00:29:51.156 }, 00:29:51.156 "ns_data": { 00:29:51.156 "can_share": true, 00:29:51.156 "id": 1 00:29:51.156 }, 00:29:51.156 "trid": { 00:29:51.156 "adrfam": "IPv4", 00:29:51.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:51.156 "traddr": "10.0.0.2", 00:29:51.156 "trsvcid": "4420", 00:29:51.156 "trtype": "TCP" 00:29:51.156 }, 00:29:51.156 "vs": { 00:29:51.156 "nvme_version": "1.3" 00:29:51.156 } 00:29:51.156 } 00:29:51.156 ] 00:29:51.156 }, 00:29:51.156 "memory_domains": [ 00:29:51.156 { 00:29:51.156 "dma_device_id": "system", 00:29:51.156 "dma_device_type": 1 00:29:51.156 } 00:29:51.156 ], 00:29:51.156 "name": "Nvme0n1", 00:29:51.156 "num_blocks": 38912, 00:29:51.156 "product_name": "NVMe disk", 00:29:51.156 "supported_io_types": { 00:29:51.156 "abort": true, 00:29:51.156 "compare": true, 00:29:51.156 "compare_and_write": true, 00:29:51.156 "flush": true, 00:29:51.156 "nvme_admin": true, 00:29:51.156 "nvme_io": true, 00:29:51.156 "read": true, 00:29:51.156 "reset": true, 00:29:51.156 "unmap": true, 00:29:51.156 "write": true, 00:29:51.156 "write_zeroes": true 00:29:51.156 }, 00:29:51.156 "uuid": "0de6dd61-4a4c-4e1a-8339-7840bd473473", 00:29:51.156 "zoned": false 00:29:51.156 } 00:29:51.156 ] 00:29:51.156 00:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:51.156 00:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89550 00:29:51.156 00:55:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:51.156 Running I/O for 10 seconds... 00:29:52.533 Latency(us) 00:29:52.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:52.533 Nvme0n1 : 1.00 8346.00 32.60 0.00 0.00 0.00 0.00 0.00 00:29:52.533 =================================================================================================================== 00:29:52.533 Total : 8346.00 32.60 0.00 0.00 0.00 0.00 0.00 00:29:52.533 00:29:53.100 00:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:29:53.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.358 Nvme0n1 : 2.00 8231.00 32.15 0.00 0.00 0.00 0.00 0.00 00:29:53.358 =================================================================================================================== 00:29:53.358 Total : 8231.00 32.15 0.00 0.00 0.00 0.00 0.00 00:29:53.358 00:29:53.617 true 00:29:53.617 00:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:29:53.617 00:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:53.875 00:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:53.875 00:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:53.875 00:55:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 89550 00:29:54.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:54.442 Nvme0n1 : 3.00 8279.33 32.34 0.00 0.00 0.00 0.00 0.00 00:29:54.442 =================================================================================================================== 00:29:54.442 Total : 8279.33 32.34 0.00 0.00 0.00 0.00 0.00 00:29:54.442 00:29:55.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:55.379 Nvme0n1 : 4.00 8287.00 32.37 0.00 0.00 0.00 0.00 0.00 00:29:55.379 =================================================================================================================== 00:29:55.379 Total : 8287.00 32.37 0.00 0.00 0.00 0.00 0.00 00:29:55.379 00:29:56.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.319 Nvme0n1 : 5.00 8323.60 32.51 0.00 0.00 0.00 0.00 0.00 00:29:56.319 =================================================================================================================== 00:29:56.319 Total : 8323.60 32.51 0.00 0.00 0.00 0.00 0.00 00:29:56.319 00:29:57.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.252 Nvme0n1 : 6.00 8330.33 32.54 0.00 0.00 0.00 0.00 0.00 00:29:57.252 =================================================================================================================== 00:29:57.252 Total : 8330.33 32.54 0.00 0.00 0.00 0.00 0.00 00:29:57.252 00:29:58.188 Job: Nvme0n1 (Core Mask 
0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.188 Nvme0n1 : 7.00 8328.14 32.53 0.00 0.00 0.00 0.00 0.00 00:29:58.188 =================================================================================================================== 00:29:58.188 Total : 8328.14 32.53 0.00 0.00 0.00 0.00 0.00 00:29:58.188 00:29:59.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.559 Nvme0n1 : 8.00 8310.25 32.46 0.00 0.00 0.00 0.00 0.00 00:29:59.559 =================================================================================================================== 00:29:59.559 Total : 8310.25 32.46 0.00 0.00 0.00 0.00 0.00 00:29:59.559 00:30:00.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:00.514 Nvme0n1 : 9.00 8273.89 32.32 0.00 0.00 0.00 0.00 0.00 00:30:00.514 =================================================================================================================== 00:30:00.514 Total : 8273.89 32.32 0.00 0.00 0.00 0.00 0.00 00:30:00.514 00:30:01.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.448 Nvme0n1 : 10.00 8214.40 32.09 0.00 0.00 0.00 0.00 0.00 00:30:01.448 =================================================================================================================== 00:30:01.448 Total : 8214.40 32.09 0.00 0.00 0.00 0.00 0.00 00:30:01.448 00:30:01.448 00:30:01.448 Latency(us) 00:30:01.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.448 Nvme0n1 : 10.01 8221.06 32.11 0.00 0.00 15564.13 7417.48 34555.35 00:30:01.448 =================================================================================================================== 00:30:01.448 Total : 8221.06 32.11 0.00 0.00 15564.13 7417.48 34555.35 00:30:01.448 0 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89508 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 89508 ']' 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 89508 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 89508 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:01.448 killing process with pid 89508 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 89508' 00:30:01.448 Received shutdown signal, test time was about 10.000000 seconds 00:30:01.448 00:30:01.448 Latency(us) 00:30:01.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.448 =================================================================================================================== 00:30:01.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 89508 00:30:01.448 00:56:04 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 89508 00:30:01.448 00:56:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:02.014 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:02.014 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:30:02.014 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:02.579 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:02.579 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:02.579 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:02.579 [2024-05-15 00:56:05.861193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:02.837 00:56:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:30:03.094 2024/05/15 00:56:06 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:661c906b-e24d-4dc2-95f5-f2401a8982e3], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:30:03.094 request: 00:30:03.095 { 00:30:03.095 "method": "bdev_lvol_get_lvstores", 00:30:03.095 "params": { 
00:30:03.095 "uuid": "661c906b-e24d-4dc2-95f5-f2401a8982e3" 00:30:03.095 } 00:30:03.095 } 00:30:03.095 Got JSON-RPC error response 00:30:03.095 GoRPCClient: error on JSON-RPC call 00:30:03.095 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:30:03.095 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:03.095 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:03.095 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:03.095 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:03.353 aio_bdev 00:30:03.353 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0de6dd61-4a4c-4e1a-8339-7840bd473473 00:30:03.353 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=0de6dd61-4a4c-4e1a-8339-7840bd473473 00:30:03.353 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:30:03.353 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:30:03.353 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:30:03.353 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:30:03.353 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:03.611 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0de6dd61-4a4c-4e1a-8339-7840bd473473 -t 2000 00:30:03.868 [ 00:30:03.868 { 00:30:03.868 "aliases": [ 00:30:03.868 "lvs/lvol" 00:30:03.868 ], 00:30:03.868 "assigned_rate_limits": { 00:30:03.868 "r_mbytes_per_sec": 0, 00:30:03.868 "rw_ios_per_sec": 0, 00:30:03.868 "rw_mbytes_per_sec": 0, 00:30:03.868 "w_mbytes_per_sec": 0 00:30:03.868 }, 00:30:03.868 "block_size": 4096, 00:30:03.868 "claimed": false, 00:30:03.868 "driver_specific": { 00:30:03.868 "lvol": { 00:30:03.868 "base_bdev": "aio_bdev", 00:30:03.868 "clone": false, 00:30:03.868 "esnap_clone": false, 00:30:03.868 "lvol_store_uuid": "661c906b-e24d-4dc2-95f5-f2401a8982e3", 00:30:03.868 "num_allocated_clusters": 38, 00:30:03.868 "snapshot": false, 00:30:03.868 "thin_provision": false 00:30:03.868 } 00:30:03.868 }, 00:30:03.868 "name": "0de6dd61-4a4c-4e1a-8339-7840bd473473", 00:30:03.868 "num_blocks": 38912, 00:30:03.868 "product_name": "Logical Volume", 00:30:03.868 "supported_io_types": { 00:30:03.868 "abort": false, 00:30:03.868 "compare": false, 00:30:03.868 "compare_and_write": false, 00:30:03.868 "flush": false, 00:30:03.868 "nvme_admin": false, 00:30:03.868 "nvme_io": false, 00:30:03.868 "read": true, 00:30:03.868 "reset": true, 00:30:03.868 "unmap": true, 00:30:03.868 "write": true, 00:30:03.868 "write_zeroes": true 00:30:03.868 }, 00:30:03.868 "uuid": "0de6dd61-4a4c-4e1a-8339-7840bd473473", 00:30:03.868 "zoned": false 00:30:03.868 } 00:30:03.868 ] 00:30:03.868 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:30:03.868 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:30:03.868 00:56:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:04.126 00:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:04.126 00:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:30:04.126 00:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:04.384 00:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:04.384 00:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0de6dd61-4a4c-4e1a-8339-7840bd473473 00:30:04.643 00:56:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 661c906b-e24d-4dc2-95f5-f2401a8982e3 00:30:04.901 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:05.159 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:05.418 00:30:05.418 real 0m18.811s 00:30:05.418 user 0m18.076s 00:30:05.418 sys 0m2.350s 00:30:05.418 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:05.418 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:05.418 ************************************ 00:30:05.418 END TEST lvs_grow_clean 00:30:05.418 ************************************ 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:05.677 ************************************ 00:30:05.677 START TEST lvs_grow_dirty 00:30:05.677 ************************************ 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:05.677 00:56:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:05.936 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:05.936 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:06.194 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:06.194 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:06.194 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:06.453 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:06.453 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:06.453 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3bd70834-df3d-4fab-ba4d-546594c5f866 lvol 150 00:30:06.711 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f951ac2e-1d25-4389-ba71-d3a829af58a0 00:30:06.711 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:06.711 00:56:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:06.970 [2024-05-15 00:56:10.177530] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:06.970 [2024-05-15 00:56:10.177624] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:06.970 true 00:30:06.970 00:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:06.970 00:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:07.230 00:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:07.230 00:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:07.488 00:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f951ac2e-1d25-4389-ba71-d3a829af58a0 00:30:07.747 00:56:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:08.013 [2024-05-15 00:56:11.174062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.013 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:08.293 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89948 00:30:08.293 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:08.293 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:08.293 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89948 /var/tmp/bdevperf.sock 00:30:08.293 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 89948 ']' 00:30:08.293 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:08.293 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:08.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:08.293 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:08.293 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:08.293 00:56:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:08.293 [2024-05-15 00:56:11.481729] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
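The dirty variant now starting exercises the same grow path as the clean run above; condensed, the grow-and-verify step and the subsequent cleanup look roughly like this (placeholders again standing in for the UUIDs printed in the trace):

  truncate -s 400M test/nvmf/target/aio_bdev            # enlarge the backing file 200M -> 400M
  rpc.py bdev_aio_rescan aio_bdev                       # bdev picks up the new size (51200 -> 102400 blocks)
  rpc.py bdev_lvol_grow_lvstore -u <lvs_uuid>           # issued while the bdevperf workload is running
  rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after
  # final cleanup; the clean run above additionally hot-removed and re-created aio_bdev
  # in between to verify the lvstore is re-examined from the re-created bdev
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete <lvol_uuid>
  rpc.py bdev_lvol_delete_lvstore -u <lvs_uuid>
  rpc.py bdev_aio_delete aio_bdev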
00:30:08.293 [2024-05-15 00:56:11.481832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89948 ] 00:30:08.551 [2024-05-15 00:56:11.615121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.551 [2024-05-15 00:56:11.703087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.485 00:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:09.485 00:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:30:09.485 00:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:09.485 Nvme0n1 00:30:09.485 00:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:09.744 [ 00:30:09.744 { 00:30:09.744 "aliases": [ 00:30:09.744 "f951ac2e-1d25-4389-ba71-d3a829af58a0" 00:30:09.744 ], 00:30:09.744 "assigned_rate_limits": { 00:30:09.744 "r_mbytes_per_sec": 0, 00:30:09.744 "rw_ios_per_sec": 0, 00:30:09.744 "rw_mbytes_per_sec": 0, 00:30:09.744 "w_mbytes_per_sec": 0 00:30:09.744 }, 00:30:09.744 "block_size": 4096, 00:30:09.744 "claimed": false, 00:30:09.744 "driver_specific": { 00:30:09.744 "mp_policy": "active_passive", 00:30:09.744 "nvme": [ 00:30:09.744 { 00:30:09.744 "ctrlr_data": { 00:30:09.744 "ana_reporting": false, 00:30:09.744 "cntlid": 1, 00:30:09.744 "firmware_revision": "24.05", 00:30:09.744 "model_number": "SPDK bdev Controller", 00:30:09.744 "multi_ctrlr": true, 00:30:09.744 "oacs": { 00:30:09.744 "firmware": 0, 00:30:09.744 "format": 0, 00:30:09.744 "ns_manage": 0, 00:30:09.744 "security": 0 00:30:09.744 }, 00:30:09.744 "serial_number": "SPDK0", 00:30:09.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:09.744 "vendor_id": "0x8086" 00:30:09.744 }, 00:30:09.744 "ns_data": { 00:30:09.744 "can_share": true, 00:30:09.744 "id": 1 00:30:09.744 }, 00:30:09.744 "trid": { 00:30:09.744 "adrfam": "IPv4", 00:30:09.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:09.744 "traddr": "10.0.0.2", 00:30:09.744 "trsvcid": "4420", 00:30:09.744 "trtype": "TCP" 00:30:09.744 }, 00:30:09.744 "vs": { 00:30:09.744 "nvme_version": "1.3" 00:30:09.744 } 00:30:09.744 } 00:30:09.744 ] 00:30:09.744 }, 00:30:09.744 "memory_domains": [ 00:30:09.744 { 00:30:09.744 "dma_device_id": "system", 00:30:09.744 "dma_device_type": 1 00:30:09.744 } 00:30:09.744 ], 00:30:09.744 "name": "Nvme0n1", 00:30:09.744 "num_blocks": 38912, 00:30:09.744 "product_name": "NVMe disk", 00:30:09.744 "supported_io_types": { 00:30:09.744 "abort": true, 00:30:09.744 "compare": true, 00:30:09.744 "compare_and_write": true, 00:30:09.744 "flush": true, 00:30:09.744 "nvme_admin": true, 00:30:09.744 "nvme_io": true, 00:30:09.744 "read": true, 00:30:09.744 "reset": true, 00:30:09.744 "unmap": true, 00:30:09.744 "write": true, 00:30:09.744 "write_zeroes": true 00:30:09.744 }, 00:30:09.744 "uuid": "f951ac2e-1d25-4389-ba71-d3a829af58a0", 00:30:09.744 "zoned": false 00:30:09.744 } 00:30:09.744 ] 00:30:09.744 00:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:09.744 00:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89996 00:30:09.744 00:56:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:10.005 Running I/O for 10 seconds... 00:30:10.942 Latency(us) 00:30:10.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:10.942 Nvme0n1 : 1.00 8592.00 33.56 0.00 0.00 0.00 0.00 0.00 00:30:10.942 =================================================================================================================== 00:30:10.942 Total : 8592.00 33.56 0.00 0.00 0.00 0.00 0.00 00:30:10.942 00:30:11.879 00:56:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:11.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.879 Nvme0n1 : 2.00 8558.50 33.43 0.00 0.00 0.00 0.00 0.00 00:30:11.879 =================================================================================================================== 00:30:11.879 Total : 8558.50 33.43 0.00 0.00 0.00 0.00 0.00 00:30:11.879 00:30:12.137 true 00:30:12.137 00:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:12.137 00:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:12.395 00:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:12.395 00:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:12.395 00:56:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 89996 00:30:12.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.961 Nvme0n1 : 3.00 8557.00 33.43 0.00 0.00 0.00 0.00 0.00 00:30:12.961 =================================================================================================================== 00:30:12.961 Total : 8557.00 33.43 0.00 0.00 0.00 0.00 0.00 00:30:12.961 00:30:13.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.896 Nvme0n1 : 4.00 8524.25 33.30 0.00 0.00 0.00 0.00 0.00 00:30:13.896 =================================================================================================================== 00:30:13.896 Total : 8524.25 33.30 0.00 0.00 0.00 0.00 0.00 00:30:13.896 00:30:14.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.833 Nvme0n1 : 5.00 8496.20 33.19 0.00 0.00 0.00 0.00 0.00 00:30:14.833 =================================================================================================================== 00:30:14.833 Total : 8496.20 33.19 0.00 0.00 0.00 0.00 0.00 00:30:14.833 00:30:16.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:16.210 Nvme0n1 : 6.00 8307.50 32.45 0.00 0.00 0.00 0.00 0.00 00:30:16.210 =================================================================================================================== 00:30:16.210 Total : 8307.50 32.45 0.00 0.00 0.00 0.00 0.00 00:30:16.210 00:30:17.154 Job: Nvme0n1 (Core Mask 
0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:17.155 Nvme0n1 : 7.00 8196.29 32.02 0.00 0.00 0.00 0.00 0.00 00:30:17.155 =================================================================================================================== 00:30:17.155 Total : 8196.29 32.02 0.00 0.00 0.00 0.00 0.00 00:30:17.155 00:30:18.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.090 Nvme0n1 : 8.00 8132.50 31.77 0.00 0.00 0.00 0.00 0.00 00:30:18.090 =================================================================================================================== 00:30:18.090 Total : 8132.50 31.77 0.00 0.00 0.00 0.00 0.00 00:30:18.090 00:30:19.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.027 Nvme0n1 : 9.00 8081.11 31.57 0.00 0.00 0.00 0.00 0.00 00:30:19.027 =================================================================================================================== 00:30:19.027 Total : 8081.11 31.57 0.00 0.00 0.00 0.00 0.00 00:30:19.027 00:30:19.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.988 Nvme0n1 : 10.00 8092.10 31.61 0.00 0.00 0.00 0.00 0.00 00:30:19.988 =================================================================================================================== 00:30:19.988 Total : 8092.10 31.61 0.00 0.00 0.00 0.00 0.00 00:30:19.988 00:30:19.988 00:30:19.988 Latency(us) 00:30:19.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.988 Nvme0n1 : 10.01 8093.21 31.61 0.00 0.00 15809.16 5272.67 149660.39 00:30:19.988 =================================================================================================================== 00:30:19.988 Total : 8093.21 31.61 0.00 0.00 15809.16 5272.67 149660.39 00:30:19.988 0 00:30:19.988 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89948 00:30:19.988 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 89948 ']' 00:30:19.988 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 89948 00:30:19.988 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:30:19.988 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:19.988 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 89948 00:30:19.988 killing process with pid 89948 00:30:19.988 Received shutdown signal, test time was about 10.000000 seconds 00:30:19.988 00:30:19.988 Latency(us) 00:30:19.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.988 =================================================================================================================== 00:30:19.988 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:19.988 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:19.988 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:19.988 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 89948' 00:30:19.988 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 89948 00:30:19.988 00:56:23 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 89948 00:30:20.260 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:20.522 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:20.780 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:20.780 00:56:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 89340 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 89340 00:30:21.039 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 89340 Killed "${NVMF_APP[@]}" "$@" 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=90159 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 90159 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 90159 ']' 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:21.039 00:56:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:21.039 [2024-05-15 00:56:24.259860] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
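What makes this the "dirty" variant is visible just above: after the I/O pass the script SIGKILLs the nvmf_tgt that still owns the grown lvstore (pid 89340) instead of shutting it down cleanly, then starts a replacement target (pid 90159). Re-creating the AIO bdev on the new target forces blobstore recovery, reported a few entries below as "Performing recovery on blobstore". A condensed sketch of that sequence, using the helpers and values from this run (pids and core masks are specific to this log; nvmfappstart is the autotest wrapper that spawns nvmf_tgt and waits for its RPC socket):

# Simulate an unclean shutdown while the lvstore is still open.
kill -9 "$nvmfpid"          # 89340 in this run
wait "$nvmfpid" || true

# Bring up a fresh target and re-attach the backing file; loading the dirty
# blobstore metadata is what triggers the recovery pass.
nvmfappstart -m 0x1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
    /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096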
00:30:21.039 [2024-05-15 00:56:24.260744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.299 [2024-05-15 00:56:24.408499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.299 [2024-05-15 00:56:24.495852] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.299 [2024-05-15 00:56:24.495932] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.299 [2024-05-15 00:56:24.495959] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.299 [2024-05-15 00:56:24.495967] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.299 [2024-05-15 00:56:24.495974] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.299 [2024-05-15 00:56:24.496000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.252 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:22.252 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:30:22.252 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:22.252 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:22.252 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:22.252 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.252 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:22.252 [2024-05-15 00:56:25.518256] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:22.252 [2024-05-15 00:56:25.518546] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:22.252 [2024-05-15 00:56:25.518785] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:22.517 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:22.517 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f951ac2e-1d25-4389-ba71-d3a829af58a0 00:30:22.517 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=f951ac2e-1d25-4389-ba71-d3a829af58a0 00:30:22.517 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:30:22.517 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:30:22.517 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:30:22.517 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:30:22.517 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:22.775 00:56:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f951ac2e-1d25-4389-ba71-d3a829af58a0 -t 2000 00:30:22.775 [ 00:30:22.775 { 00:30:22.775 "aliases": [ 00:30:22.775 "lvs/lvol" 00:30:22.775 ], 00:30:22.775 "assigned_rate_limits": { 00:30:22.775 "r_mbytes_per_sec": 0, 00:30:22.775 "rw_ios_per_sec": 0, 00:30:22.775 "rw_mbytes_per_sec": 0, 00:30:22.775 "w_mbytes_per_sec": 0 00:30:22.775 }, 00:30:22.775 "block_size": 4096, 00:30:22.775 "claimed": false, 00:30:22.775 "driver_specific": { 00:30:22.775 "lvol": { 00:30:22.775 "base_bdev": "aio_bdev", 00:30:22.775 "clone": false, 00:30:22.775 "esnap_clone": false, 00:30:22.775 "lvol_store_uuid": "3bd70834-df3d-4fab-ba4d-546594c5f866", 00:30:22.775 "num_allocated_clusters": 38, 00:30:22.775 "snapshot": false, 00:30:22.775 "thin_provision": false 00:30:22.775 } 00:30:22.775 }, 00:30:22.775 "name": "f951ac2e-1d25-4389-ba71-d3a829af58a0", 00:30:22.775 "num_blocks": 38912, 00:30:22.775 "product_name": "Logical Volume", 00:30:22.775 "supported_io_types": { 00:30:22.775 "abort": false, 00:30:22.775 "compare": false, 00:30:22.775 "compare_and_write": false, 00:30:22.775 "flush": false, 00:30:22.775 "nvme_admin": false, 00:30:22.775 "nvme_io": false, 00:30:22.775 "read": true, 00:30:22.775 "reset": true, 00:30:22.775 "unmap": true, 00:30:22.775 "write": true, 00:30:22.775 "write_zeroes": true 00:30:22.775 }, 00:30:22.775 "uuid": "f951ac2e-1d25-4389-ba71-d3a829af58a0", 00:30:22.775 "zoned": false 00:30:22.775 } 00:30:22.775 ] 00:30:22.775 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:30:22.775 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:22.775 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:23.343 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:23.344 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:23.344 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:23.344 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:23.344 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:23.603 [2024-05-15 00:56:26.759638] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:23.603 00:56:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:23.862 2024/05/15 00:56:27 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:3bd70834-df3d-4fab-ba4d-546594c5f866], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:30:23.862 request: 00:30:23.862 { 00:30:23.862 "method": "bdev_lvol_get_lvstores", 00:30:23.862 "params": { 00:30:23.862 "uuid": "3bd70834-df3d-4fab-ba4d-546594c5f866" 00:30:23.862 } 00:30:23.862 } 00:30:23.862 Got JSON-RPC error response 00:30:23.862 GoRPCClient: error on JSON-RPC call 00:30:23.862 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:30:23.862 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:23.862 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:23.862 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:23.862 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:24.121 aio_bdev 00:30:24.121 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f951ac2e-1d25-4389-ba71-d3a829af58a0 00:30:24.121 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=f951ac2e-1d25-4389-ba71-d3a829af58a0 00:30:24.121 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:30:24.121 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:30:24.121 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:30:24.121 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:30:24.121 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:24.380 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f951ac2e-1d25-4389-ba71-d3a829af58a0 -t 2000 00:30:24.639 [ 00:30:24.639 { 00:30:24.639 "aliases": [ 00:30:24.639 "lvs/lvol" 00:30:24.639 ], 00:30:24.639 
"assigned_rate_limits": { 00:30:24.639 "r_mbytes_per_sec": 0, 00:30:24.639 "rw_ios_per_sec": 0, 00:30:24.639 "rw_mbytes_per_sec": 0, 00:30:24.639 "w_mbytes_per_sec": 0 00:30:24.639 }, 00:30:24.639 "block_size": 4096, 00:30:24.639 "claimed": false, 00:30:24.639 "driver_specific": { 00:30:24.639 "lvol": { 00:30:24.639 "base_bdev": "aio_bdev", 00:30:24.639 "clone": false, 00:30:24.639 "esnap_clone": false, 00:30:24.639 "lvol_store_uuid": "3bd70834-df3d-4fab-ba4d-546594c5f866", 00:30:24.639 "num_allocated_clusters": 38, 00:30:24.639 "snapshot": false, 00:30:24.639 "thin_provision": false 00:30:24.639 } 00:30:24.639 }, 00:30:24.639 "name": "f951ac2e-1d25-4389-ba71-d3a829af58a0", 00:30:24.639 "num_blocks": 38912, 00:30:24.639 "product_name": "Logical Volume", 00:30:24.639 "supported_io_types": { 00:30:24.639 "abort": false, 00:30:24.639 "compare": false, 00:30:24.639 "compare_and_write": false, 00:30:24.639 "flush": false, 00:30:24.639 "nvme_admin": false, 00:30:24.639 "nvme_io": false, 00:30:24.639 "read": true, 00:30:24.639 "reset": true, 00:30:24.639 "unmap": true, 00:30:24.639 "write": true, 00:30:24.639 "write_zeroes": true 00:30:24.639 }, 00:30:24.639 "uuid": "f951ac2e-1d25-4389-ba71-d3a829af58a0", 00:30:24.639 "zoned": false 00:30:24.639 } 00:30:24.639 ] 00:30:24.639 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:30:24.639 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:24.639 00:56:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:24.898 00:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:24.898 00:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:24.898 00:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:25.163 00:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:25.163 00:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f951ac2e-1d25-4389-ba71-d3a829af58a0 00:30:25.422 00:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3bd70834-df3d-4fab-ba4d-546594c5f866 00:30:25.681 00:56:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:25.939 00:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:30:26.198 00:30:26.198 real 0m20.723s 00:30:26.198 user 0m43.765s 00:30:26.198 sys 0m8.024s 00:30:26.198 00:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:26.198 00:56:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:26.198 ************************************ 00:30:26.198 END TEST lvs_grow_dirty 00:30:26.198 ************************************ 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:26.457 nvmf_trace.0 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:26.457 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:26.457 rmmod nvme_tcp 00:30:26.457 rmmod nvme_fabrics 00:30:26.716 rmmod nvme_keyring 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 90159 ']' 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 90159 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 90159 ']' 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 90159 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 90159 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:26.716 killing process with pid 90159 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 90159' 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 90159 00:30:26.716 00:56:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 90159 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:26.976 00:30:26.976 real 0m42.102s 00:30:26.976 user 1m8.272s 00:30:26.976 sys 0m11.186s 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:26.976 00:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:26.976 ************************************ 00:30:26.976 END TEST nvmf_lvs_grow 00:30:26.976 ************************************ 00:30:26.976 00:56:30 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:30:26.976 00:56:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:26.976 00:56:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:26.976 00:56:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:26.976 ************************************ 00:30:26.976 START TEST nvmf_bdev_io_wait 00:30:26.976 ************************************ 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:30:26.976 * Looking for test storage... 00:30:26.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:26.976 
00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:26.976 Cannot find device "nvmf_tgt_br" 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:26.976 Cannot find device "nvmf_tgt_br2" 00:30:26.976 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:30:26.977 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:26.977 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:27.236 Cannot find device "nvmf_tgt_br" 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:27.236 Cannot find device "nvmf_tgt_br2" 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:27.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:27.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
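These entries (together with the bridge, iptables and ping checks that continue just below) rebuild the harness's virtual test network: the target runs inside the nvmf_tgt_ns_spdk namespace and reaches the initiator over veth pairs whose bridge-side peers are enslaved to nvmf_br. A condensed sketch of that topology with the same interface names (the names and the 10.0.0.x addressing are the harness's convention, not anything SPDK requires; the second target interface, nvmf_tgt_if2 at 10.0.0.3, follows the identical pattern and is omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Stitch the bridge-side peers together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up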
00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:27.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:30:27.236 00:30:27.236 --- 10.0.0.2 ping statistics --- 00:30:27.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.236 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:27.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:27.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:30:27.236 00:30:27.236 --- 10.0.0.3 ping statistics --- 00:30:27.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.236 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:27.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:27.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:30:27.236 00:30:27.236 --- 10.0.0.1 ping statistics --- 00:30:27.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.236 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:27.236 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.237 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:27.237 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=90577 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 90577 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 90577 ']' 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:27.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:27.496 00:56:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:27.496 [2024-05-15 00:56:30.587856] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:30:27.496 [2024-05-15 00:56:30.587970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.496 [2024-05-15 00:56:30.726666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.754 [2024-05-15 00:56:30.818720] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.754 [2024-05-15 00:56:30.818784] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:27.754 [2024-05-15 00:56:30.818796] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.754 [2024-05-15 00:56:30.818805] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.754 [2024-05-15 00:56:30.818813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.754 [2024-05-15 00:56:30.819884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.754 [2024-05-15 00:56:30.819971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.754 [2024-05-15 00:56:30.820105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.754 [2024-05-15 00:56:30.820109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.321 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.581 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.581 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:28.581 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.582 [2024-05-15 00:56:31.625835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.582 Malloc0 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.582 
00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.582 [2024-05-15 00:56:31.679842] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:28.582 [2024-05-15 00:56:31.680353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=90630 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=90632 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=90634 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.582 { 00:30:28.582 "params": { 00:30:28.582 "name": "Nvme$subsystem", 00:30:28.582 "trtype": "$TEST_TRANSPORT", 00:30:28.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.582 "adrfam": "ipv4", 00:30:28.582 "trsvcid": "$NVMF_PORT", 00:30:28.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.582 "hdgst": ${hdgst:-false}, 00:30:28.582 "ddgst": ${ddgst:-false} 00:30:28.582 }, 00:30:28.582 "method": "bdev_nvme_attach_controller" 00:30:28.582 } 00:30:28.582 EOF 00:30:28.582 )") 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.582 { 00:30:28.582 "params": { 00:30:28.582 "name": "Nvme$subsystem", 00:30:28.582 "trtype": "$TEST_TRANSPORT", 00:30:28.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.582 "adrfam": "ipv4", 00:30:28.582 "trsvcid": "$NVMF_PORT", 00:30:28.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.582 "hdgst": ${hdgst:-false}, 00:30:28.582 "ddgst": ${ddgst:-false} 00:30:28.582 }, 00:30:28.582 "method": "bdev_nvme_attach_controller" 00:30:28.582 } 00:30:28.582 EOF 00:30:28.582 )") 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=90635 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.582 { 00:30:28.582 "params": { 00:30:28.582 "name": "Nvme$subsystem", 00:30:28.582 "trtype": "$TEST_TRANSPORT", 00:30:28.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.582 "adrfam": "ipv4", 00:30:28.582 "trsvcid": "$NVMF_PORT", 00:30:28.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.582 "hdgst": ${hdgst:-false}, 00:30:28.582 "ddgst": ${ddgst:-false} 00:30:28.582 }, 00:30:28.582 "method": "bdev_nvme_attach_controller" 00:30:28.582 } 00:30:28.582 EOF 00:30:28.582 )") 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:28.582 "params": { 00:30:28.582 "name": "Nvme1", 00:30:28.582 "trtype": "tcp", 00:30:28.582 "traddr": "10.0.0.2", 00:30:28.582 "adrfam": "ipv4", 00:30:28.582 "trsvcid": "4420", 00:30:28.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.582 "hdgst": false, 00:30:28.582 "ddgst": false 00:30:28.582 }, 00:30:28.582 "method": "bdev_nvme_attach_controller" 00:30:28.582 }' 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 
00:30:28.582 "params": { 00:30:28.582 "name": "Nvme1", 00:30:28.582 "trtype": "tcp", 00:30:28.582 "traddr": "10.0.0.2", 00:30:28.582 "adrfam": "ipv4", 00:30:28.582 "trsvcid": "4420", 00:30:28.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.582 "hdgst": false, 00:30:28.582 "ddgst": false 00:30:28.582 }, 00:30:28.582 "method": "bdev_nvme_attach_controller" 00:30:28.582 }' 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.582 { 00:30:28.582 "params": { 00:30:28.582 "name": "Nvme$subsystem", 00:30:28.582 "trtype": "$TEST_TRANSPORT", 00:30:28.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.582 "adrfam": "ipv4", 00:30:28.582 "trsvcid": "$NVMF_PORT", 00:30:28.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.582 "hdgst": ${hdgst:-false}, 00:30:28.582 "ddgst": ${ddgst:-false} 00:30:28.582 }, 00:30:28.582 "method": "bdev_nvme_attach_controller" 00:30:28.582 } 00:30:28.582 EOF 00:30:28.582 )") 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:28.582 "params": { 00:30:28.582 "name": "Nvme1", 00:30:28.582 "trtype": "tcp", 00:30:28.582 "traddr": "10.0.0.2", 00:30:28.582 "adrfam": "ipv4", 00:30:28.582 "trsvcid": "4420", 00:30:28.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.582 "hdgst": false, 00:30:28.582 "ddgst": false 00:30:28.582 }, 00:30:28.582 "method": "bdev_nvme_attach_controller" 00:30:28.582 }' 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:30:28.582 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:28.582 "params": { 00:30:28.582 "name": "Nvme1", 00:30:28.582 "trtype": "tcp", 00:30:28.582 "traddr": "10.0.0.2", 00:30:28.582 "adrfam": "ipv4", 00:30:28.582 "trsvcid": "4420", 00:30:28.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.582 "hdgst": false, 00:30:28.582 "ddgst": false 00:30:28.582 }, 00:30:28.582 "method": "bdev_nvme_attach_controller" 00:30:28.582 }' 00:30:28.582 [2024-05-15 00:56:31.742179] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:30:28.583 [2024-05-15 00:56:31.742267] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:28.583 [2024-05-15 00:56:31.761235] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:30:28.583 [2024-05-15 00:56:31.761314] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:28.583 [2024-05-15 00:56:31.763106] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:30:28.583 [2024-05-15 00:56:31.763187] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:28.583 00:56:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 90630 00:30:28.583 [2024-05-15 00:56:31.773220] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:30:28.583 [2024-05-15 00:56:31.773281] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:28.841 [2024-05-15 00:56:31.960695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.841 [2024-05-15 00:56:32.032547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.841 [2024-05-15 00:56:32.034427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:28.841 [2024-05-15 00:56:32.100631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:28.841 [2024-05-15 00:56:32.113288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.130 [2024-05-15 00:56:32.185007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:29.130 [2024-05-15 00:56:32.185634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.130 Running I/O for 1 seconds... 00:30:29.130 Running I/O for 1 seconds... 00:30:29.130 [2024-05-15 00:56:32.257900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:29.130 Running I/O for 1 seconds... 00:30:29.416 Running I/O for 1 seconds... 
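Four bdevperf instances (file prefixes spdk1 through spdk4 in the EAL parameters above) have now been launched against the same target, each fed its configuration through --json /dev/fd/63. For readability, the per-controller entry that gen_nvmf_target_json expands to in the trace, and which every instance uses to attach to 10.0.0.2:4420, is:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

The helper (the config+=, jq and printf steps traced above) assembles entries of this shape into the JSON configuration each bdevperf process reads at start-up; the one-second write/flush/unmap/read runs whose results follow all go through controllers attached this way.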
00:30:29.983 00:30:29.983 Latency(us) 00:30:29.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.983 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:29.983 Nvme1n1 : 1.01 10159.13 39.68 0.00 0.00 12545.63 7238.75 19422.49 00:30:29.983 =================================================================================================================== 00:30:29.983 Total : 10159.13 39.68 0.00 0.00 12545.63 7238.75 19422.49 00:30:29.983 00:30:29.983 Latency(us) 00:30:29.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.983 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:29.983 Nvme1n1 : 1.00 196318.23 766.87 0.00 0.00 649.66 284.86 1295.83 00:30:29.983 =================================================================================================================== 00:30:29.983 Total : 196318.23 766.87 0.00 0.00 649.66 284.86 1295.83 00:30:30.242 00:30:30.242 Latency(us) 00:30:30.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.242 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:30.242 Nvme1n1 : 1.01 8861.10 34.61 0.00 0.00 14386.56 5004.57 21686.46 00:30:30.242 =================================================================================================================== 00:30:30.242 Total : 8861.10 34.61 0.00 0.00 14386.56 5004.57 21686.46 00:30:30.242 00:30:30.242 Latency(us) 00:30:30.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.242 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:30.242 Nvme1n1 : 1.01 7508.44 29.33 0.00 0.00 16955.74 9711.24 28597.53 00:30:30.242 =================================================================================================================== 00:30:30.242 Total : 7508.44 29.33 0.00 0.00 16955.74 9711.24 28597.53 00:30:30.242 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 90632 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 90634 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 90635 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:30.501 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:30.501 rmmod nvme_tcp 00:30:30.501 rmmod nvme_fabrics 00:30:30.760 rmmod nvme_keyring 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 90577 ']' 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 90577 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 90577 ']' 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 90577 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 90577 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:30.760 killing process with pid 90577 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 90577' 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 90577 00:30:30.760 [2024-05-15 00:56:33.845502] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:30.760 00:56:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 90577 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:31.020 00:30:31.020 real 0m4.000s 00:30:31.020 user 0m17.468s 00:30:31.020 sys 0m2.133s 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:31.020 00:56:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.020 ************************************ 00:30:31.020 END TEST nvmf_bdev_io_wait 00:30:31.020 ************************************ 00:30:31.020 00:56:34 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:30:31.020 00:56:34 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:31.020 00:56:34 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:31.020 00:56:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.020 
************************************ 00:30:31.020 START TEST nvmf_queue_depth 00:30:31.020 ************************************ 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:30:31.020 * Looking for test storage... 00:30:31.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:31.020 00:56:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:31.021 Cannot find device "nvmf_tgt_br" 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:31.021 Cannot find device "nvmf_tgt_br2" 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:31.021 Cannot find device "nvmf_tgt_br" 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:30:31.021 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:31.280 Cannot find device "nvmf_tgt_br2" 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:31.280 00:56:34 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:31.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:31.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:30:31.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:30:31.280 00:30:31.280 --- 10.0.0.2 ping statistics --- 00:30:31.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.280 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:31.280 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:31.280 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:30:31.280 00:30:31.280 --- 10.0.0.3 ping statistics --- 00:30:31.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.280 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:31.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:30:31.280 00:30:31.280 --- 10.0.0.1 ping statistics --- 00:30:31.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.280 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:31.280 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=90871 00:30:31.281 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 90871 00:30:31.281 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 90871 ']' 00:30:31.281 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:31.281 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.281 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:31.281 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
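The three pings above verify the freshly built topology end to end: the host reaches both namespaced target addresses (10.0.0.2 and 10.0.0.3), and the namespace reaches the initiator address 10.0.0.1 back across the bridge. nvmfappstart then launches the target application inside that namespace and records its pid (90871 here). Condensed from the trace, and assuming the same repository layout, the launch amounts to roughly:

    # start the NVMe-oF target with core mask 0x2 inside the test namespace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten (as echoed above) then blocks until the app is serving RPCs
    # on /var/tmp/spdk.sock before the script issues any rpc_cmd calls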
00:30:31.281 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:31.281 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:31.540 [2024-05-15 00:56:34.609055] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:30:31.540 [2024-05-15 00:56:34.609164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.540 [2024-05-15 00:56:34.746806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.540 [2024-05-15 00:56:34.824766] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.540 [2024-05-15 00:56:34.824838] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.540 [2024-05-15 00:56:34.824865] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.540 [2024-05-15 00:56:34.824874] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.540 [2024-05-15 00:56:34.824881] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.540 [2024-05-15 00:56:34.824913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.799 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:31.799 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:30:31.799 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:31.799 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:31.799 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:31.799 00:56:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.799 00:56:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:31.800 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.800 00:56:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:31.800 [2024-05-15 00:56:35.000823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:31.800 Malloc0 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.800 00:56:35 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:31.800 [2024-05-15 00:56:35.062940] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:31.800 [2024-05-15 00:56:35.063209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=90902 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 90902 /var/tmp/bdevperf.sock 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 90902 ']' 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:31.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:31.800 00:56:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:32.058 [2024-05-15 00:56:35.131027] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
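At this point the queue-depth test has built the entire target side over the default RPC socket and has started a bdevperf client in RPC-driven mode (-z) on a socket of its own. Rewritten as plain rpc.py invocations, the traced sequence is approximately the following (a sketch; the test itself drives the same RPCs through its rpc_cmd wrapper):

    # target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks,
    # and a subsystem with one namespace listening on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf idles on /var/tmp/bdevperf.sock (-z) until a controller
    # is attached over RPC, then runs a 10 s verify workload at queue depth 1024, 4 KiB I/O
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The bdevperf binary is the same build/examples/bdevperf used above; the controller attach and the perform_tests trigger follow in the next entries.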
00:30:32.058 [2024-05-15 00:56:35.131162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90902 ] 00:30:32.058 [2024-05-15 00:56:35.267702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.317 [2024-05-15 00:56:35.353929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.885 00:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:32.885 00:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:30:32.885 00:56:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.885 00:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:32.885 00:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:33.144 NVMe0n1 00:30:33.144 00:56:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:33.144 00:56:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:33.144 Running I/O for 10 seconds... 00:30:43.133 00:30:43.133 Latency(us) 00:30:43.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.133 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:43.133 Verification LBA range: start 0x0 length 0x4000 00:30:43.133 NVMe0n1 : 10.07 8546.06 33.38 0.00 0.00 119330.21 15252.01 80549.70 00:30:43.133 =================================================================================================================== 00:30:43.133 Total : 8546.06 33.38 0.00 0.00 119330.21 15252.01 80549.70 00:30:43.133 0 00:30:43.133 00:56:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 90902 00:30:43.133 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 90902 ']' 00:30:43.133 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 90902 00:30:43.133 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 90902 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:43.391 killing process with pid 90902 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 90902' 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 90902 00:30:43.391 Received shutdown signal, test time was about 10.000000 seconds 00:30:43.391 00:30:43.391 Latency(us) 00:30:43.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.391 =================================================================================================================== 00:30:43.391 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth 
-- common/autotest_common.sh@971 -- # wait 90902 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:43.391 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:30:43.649 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:43.649 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:43.650 rmmod nvme_tcp 00:30:43.650 rmmod nvme_fabrics 00:30:43.650 rmmod nvme_keyring 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 90871 ']' 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 90871 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 90871 ']' 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 90871 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 90871 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:43.650 killing process with pid 90871 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 90871' 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 90871 00:30:43.650 [2024-05-15 00:56:46.774058] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:43.650 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 90871 00:30:43.918 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:43.918 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:43.918 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:43.918 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:43.918 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:43.918 00:56:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.918 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:43.918 00:56:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.918 00:56:47 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:43.918 00:30:43.918 real 0m12.892s 00:30:43.918 user 0m22.669s 00:30:43.918 sys 0m2.021s 00:30:43.918 00:56:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:43.918 00:56:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:43.918 ************************************ 00:30:43.918 END TEST nvmf_queue_depth 00:30:43.918 ************************************ 00:30:43.918 00:56:47 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:30:43.918 00:56:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:43.918 00:56:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:43.918 00:56:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.918 ************************************ 00:30:43.918 START TEST nvmf_target_multipath 00:30:43.918 ************************************ 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:30:43.918 * Looking for test storage... 00:30:43.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.918 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:43.919 00:56:47 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:43.919 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:44.178 Cannot find device "nvmf_tgt_br" 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:44.178 Cannot find device "nvmf_tgt_br2" 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:44.178 Cannot find device "nvmf_tgt_br" 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:44.178 Cannot find device "nvmf_tgt_br2" 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:44.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:44.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
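nvmf_veth_init is rebuilding for the multipath test the same topology the previous test tore down: one veth pair whose host end (nvmf_init_if, 10.0.0.1) acts as the initiator interface, and two pairs whose far ends (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace to provide two target addresses. Stripped of the xtrace noise, the core of the setup traced so far is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

The bridge-side peers (nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2) are enslaved to the nvmf_br bridge and TCP port 4420 is opened in iptables in the entries that follow.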
00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:44.178 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:44.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:30:44.436 00:30:44.436 --- 10.0.0.2 ping statistics --- 00:30:44.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.436 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:44.436 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:44.436 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:30:44.436 00:30:44.436 --- 10.0.0.3 ping statistics --- 00:30:44.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.436 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:44.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:44.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:30:44.436 00:30:44.436 --- 10.0.0.1 ping statistics --- 00:30:44.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.436 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=91233 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 91233 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@828 -- # '[' -z 91233 ']' 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.436 00:56:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:44.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.437 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:44.437 00:56:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:44.437 [2024-05-15 00:56:47.584647] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
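The target application itself is launched inside that namespace, and the harness only continues once the RPC Unix socket answers. A minimal stand-in for that wait loop (an illustrative sketch, not the actual waitforlisten implementation; paths and flags match the log) could look like:

# Sketch: start nvmf_tgt in the namespace and poll its RPC socket.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 60); do            # give the target up to ~30 seconds
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break                       # socket is up and answering RPCs
    fi
    sleep 0.5
done

Because /var/tmp/spdk.sock is a Unix-domain socket, rpc.py can be run from the root namespace even though the target lives in nvmf_tgt_ns_spdk, which is why the rpc.py calls later in the log need no ip netns exec prefix.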
00:30:44.437 [2024-05-15 00:56:47.584736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.694 [2024-05-15 00:56:47.726671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.694 [2024-05-15 00:56:47.827203] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.694 [2024-05-15 00:56:47.827492] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.694 [2024-05-15 00:56:47.827691] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.694 [2024-05-15 00:56:47.827851] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.694 [2024-05-15 00:56:47.827895] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.694 [2024-05-15 00:56:47.828101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.694 [2024-05-15 00:56:47.828219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.694 [2024-05-15 00:56:47.828781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.694 [2024-05-15 00:56:47.828789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.259 00:56:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:45.259 00:56:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@861 -- # return 0 00:30:45.259 00:56:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:45.259 00:56:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:45.259 00:56:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:45.517 00:56:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.517 00:56:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:45.517 [2024-05-15 00:56:48.763875] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.517 00:56:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:46.082 Malloc0 00:30:46.082 00:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:30:46.082 00:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:46.340 00:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.599 [2024-05-15 00:56:49.772427] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:46.599 [2024-05-15 00:56:49.772732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:30:46.599 00:56:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:46.857 [2024-05-15 00:56:49.996889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:46.857 00:56:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:30:47.115 00:56:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:30:47.373 00:56:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:30:47.373 00:56:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local i=0 00:30:47.373 00:56:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:30:47.373 00:56:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:30:47.373 00:56:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # sleep 2 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # return 0 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@76 -- # (( 2 == 2 )) 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=91371 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:30:49.274 00:56:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:30:49.274 [global] 00:30:49.274 thread=1 00:30:49.274 invalidate=1 00:30:49.274 rw=randrw 00:30:49.274 time_based=1 00:30:49.274 runtime=6 00:30:49.274 ioengine=libaio 00:30:49.274 direct=1 00:30:49.274 bs=4096 00:30:49.274 iodepth=128 00:30:49.274 norandommap=0 00:30:49.274 numjobs=1 00:30:49.274 00:30:49.274 verify_dump=1 00:30:49.274 verify_backlog=512 00:30:49.274 verify_state_save=0 00:30:49.274 do_verify=1 00:30:49.274 verify=crc32c-intel 00:30:49.274 [job0] 00:30:49.274 filename=/dev/nvme0n1 00:30:49.274 Could not set queue depth (nvme0n1) 00:30:49.533 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:49.533 fio-3.35 00:30:49.533 Starting 1 thread 00:30:50.468 00:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:50.726 00:56:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- 
# local path=nvme0c0n1 ana_state=inaccessible 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:50.726 00:56:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:30:52.103 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:30:52.103 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:52.103 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:52.103 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:52.103 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:30:52.362 00:56:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:30:53.299 00:56:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:30:53.299 00:56:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:53.299 00:56:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:30:53.299 00:56:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 91371 00:30:55.830 00:30:55.831 job0: (groupid=0, jobs=1): err= 0: pid=91396: Wed May 15 00:56:58 2024 00:30:55.831 read: IOPS=10.8k, BW=42.1MiB/s (44.2MB/s)(253MiB/6006msec) 00:30:55.831 slat (usec): min=4, max=5577, avg=53.22, stdev=234.81 00:30:55.831 clat (usec): min=1487, max=14556, avg=7995.63, stdev=1187.05 00:30:55.831 lat (usec): min=1517, max=14566, avg=8048.85, stdev=1196.30 00:30:55.831 clat percentiles (usec): 00:30:55.831 | 1.00th=[ 4817], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 7308], 00:30:55.831 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8094], 00:30:55.831 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[10028], 00:30:55.831 | 99.00th=[11731], 99.50th=[12125], 99.90th=[13042], 99.95th=[13304], 00:30:55.831 | 99.99th=[14484] 00:30:55.831 bw ( KiB/s): min=14856, max=29320, per=54.53%, avg=23510.00, stdev=4392.40, samples=12 00:30:55.831 iops : min= 3714, max= 7330, avg=5877.50, stdev=1098.10, samples=12 00:30:55.831 write: IOPS=6332, BW=24.7MiB/s (25.9MB/s)(138MiB/5560msec); 0 zone resets 00:30:55.831 slat (usec): min=12, max=2874, avg=63.28, stdev=161.78 00:30:55.831 clat (usec): min=751, max=14393, avg=6928.91, stdev=980.10 00:30:55.831 lat (usec): min=793, max=14730, avg=6992.20, stdev=984.04 00:30:55.831 clat percentiles (usec): 00:30:55.831 | 1.00th=[ 3949], 5.00th=[ 5080], 10.00th=[ 5932], 20.00th=[ 6390], 00:30:55.831 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7177], 00:30:55.831 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8160], 00:30:55.831 | 99.00th=[10028], 99.50th=[10552], 99.90th=[12387], 99.95th=[12911], 00:30:55.831 | 99.99th=[13304] 00:30:55.831 bw ( KiB/s): min=15600, max=28664, per=92.52%, avg=23436.00, stdev=4000.93, samples=12 00:30:55.831 iops : min= 3900, max= 7166, avg=5859.00, stdev=1000.23, samples=12 00:30:55.831 lat (usec) : 1000=0.01% 00:30:55.831 lat (msec) : 2=0.01%, 4=0.49%, 10=95.79%, 20=3.71% 00:30:55.831 cpu : usr=5.56%, sys=22.01%, ctx=6548, majf=0, minf=100 00:30:55.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:30:55.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:55.831 issued rwts: total=64741,35210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:55.831 00:30:55.831 Run status group 0 (all jobs): 00:30:55.831 READ: bw=42.1MiB/s (44.2MB/s), 42.1MiB/s-42.1MiB/s (44.2MB/s-44.2MB/s), io=253MiB (265MB), run=6006-6006msec 00:30:55.831 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=138MiB (144MB), run=5560-5560msec 00:30:55.831 00:30:55.831 Disk stats (read/write): 00:30:55.831 nvme0n1: ios=63943/34322, merge=0/0, 
ticks=481348/222837, in_queue=704185, util=98.65% 00:30:55.831 00:56:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:55.831 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:30:56.089 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:30:56.089 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:30:56.089 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:56.089 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:30:56.089 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:30:56.089 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:30:56.089 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:30:56.089 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:30:56.089 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:56.090 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:30:56.090 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:56.090 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:30:56.090 00:56:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:30:57.465 00:57:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:30:57.465 00:57:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:30:57.465 00:57:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:30:57.465 00:57:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:30:57.465 00:57:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=91523 00:30:57.465 00:57:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:30:57.465 00:57:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:30:57.465 [global] 00:30:57.465 thread=1 00:30:57.465 invalidate=1 00:30:57.465 rw=randrw 00:30:57.465 time_based=1 00:30:57.465 runtime=6 00:30:57.465 ioengine=libaio 00:30:57.465 direct=1 00:30:57.465 bs=4096 00:30:57.465 iodepth=128 00:30:57.465 norandommap=0 00:30:57.465 numjobs=1 00:30:57.465 00:30:57.465 verify_dump=1 00:30:57.465 verify_backlog=512 00:30:57.465 verify_state_save=0 00:30:57.465 do_verify=1 00:30:57.465 verify=crc32c-intel 00:30:57.465 [job0] 00:30:57.465 filename=/dev/nvme0n1 00:30:57.465 Could not set queue depth (nvme0n1) 00:30:57.465 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:57.465 fio-3.35 00:30:57.465 Starting 1 thread 00:30:58.399 00:57:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:58.399 00:57:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:30:58.963 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:30:58.963 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:30:58.963 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:58.963 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:30:58.963 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:30:58.964 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:30:58.964 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:30:58.964 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:30:58.964 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:58.964 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:30:58.964 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:30:58.964 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:58.964 00:57:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:30:59.895 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:30:59.895 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:59.895 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:59.895 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:00.153 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:31:00.412 00:57:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:31:01.350 00:57:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:31:01.350 00:57:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:31:01.350 00:57:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:31:01.350 00:57:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 91523 00:31:03.882 00:31:03.882 job0: (groupid=0, jobs=1): err= 0: pid=91544: Wed May 15 00:57:06 2024 00:31:03.882 read: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(285MiB/6003msec) 00:31:03.882 slat (usec): min=2, max=5230, avg=42.76, stdev=208.92 00:31:03.882 clat (usec): min=345, max=13792, avg=7316.67, stdev=1587.40 00:31:03.882 lat (usec): min=369, max=13808, avg=7359.43, stdev=1605.64 00:31:03.882 clat percentiles (usec): 00:31:03.882 | 1.00th=[ 3556], 5.00th=[ 4490], 10.00th=[ 5080], 20.00th=[ 5932], 00:31:03.882 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7701], 00:31:03.882 | 70.00th=[ 8029], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9634], 00:31:03.882 | 99.00th=[11600], 99.50th=[11863], 99.90th=[12649], 99.95th=[13042], 00:31:03.882 | 99.99th=[13566] 00:31:03.882 bw ( KiB/s): min= 5496, max=43408, per=52.44%, avg=25503.27, stdev=10708.13, samples=11 00:31:03.882 iops : min= 1374, max=10852, avg=6375.82, stdev=2677.03, samples=11 00:31:03.882 write: IOPS=7218, BW=28.2MiB/s (29.6MB/s)(148MiB/5247msec); 0 zone resets 00:31:03.882 slat (usec): min=4, max=3756, avg=52.48, stdev=134.02 00:31:03.882 clat (usec): min=451, max=13510, avg=6015.59, stdev=1582.01 00:31:03.882 lat (usec): min=497, max=13531, avg=6068.07, stdev=1595.91 00:31:03.882 clat percentiles (usec): 00:31:03.882 | 1.00th=[ 2507], 5.00th=[ 3261], 10.00th=[ 3720], 20.00th=[ 4359], 00:31:03.882 | 30.00th=[ 5080], 40.00th=[ 6063], 50.00th=[ 6456], 60.00th=[ 6783], 00:31:03.882 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7635], 95.00th=[ 7898], 00:31:03.882 | 99.00th=[ 9372], 99.50th=[10290], 99.90th=[12125], 99.95th=[12387], 00:31:03.882 | 99.99th=[13304] 00:31:03.882 bw ( KiB/s): min= 5776, max=42576, per=88.35%, avg=25511.27, stdev=10471.43, samples=11 00:31:03.882 iops : min= 1444, max=10644, avg=6377.82, stdev=2617.86, samples=11 00:31:03.882 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:31:03.882 lat (msec) : 2=0.18%, 4=6.12%, 10=91.04%, 20=2.62% 00:31:03.882 cpu : usr=6.05%, sys=24.70%, ctx=7556, majf=0, minf=121 00:31:03.882 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:31:03.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:03.882 issued rwts: total=72987,37875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.882 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:03.882 00:31:03.882 Run status group 0 (all jobs): 00:31:03.882 READ: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=285MiB (299MB), run=6003-6003msec 00:31:03.882 WRITE: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=148MiB (155MB), run=5247-5247msec 00:31:03.882 00:31:03.882 Disk stats (read/write): 00:31:03.882 nvme0n1: ios=71886/37584, merge=0/0, ticks=488919/207498, in_queue=696417, util=98.63% 00:31:03.882 00:57:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:03.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:03.882 00:57:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:03.882 00:57:06 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1216 -- # local i=0 00:31:03.882 00:57:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:31:03.882 00:57:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:03.882 00:57:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:31:03.882 00:57:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:03.882 00:57:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1228 -- # return 0 00:31:03.882 00:57:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:03.882 rmmod nvme_tcp 00:31:03.882 rmmod nvme_fabrics 00:31:03.882 rmmod nvme_keyring 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 91233 ']' 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 91233 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@947 -- # '[' -z 91233 ']' 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # kill -0 91233 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # uname 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 91233 00:31:03.882 killing process with pid 91233 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # echo 'killing process with pid 91233' 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # kill 91233 00:31:03.882 [2024-05-15 00:57:07.155450] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:03.882 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@971 -- # wait 91233 00:31:04.141 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:04.141 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:04.141 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:04.141 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:04.141 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:04.141 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.141 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:04.141 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.141 00:57:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:04.400 00:31:04.400 real 0m20.346s 00:31:04.400 user 1m19.961s 00:31:04.400 sys 0m6.496s 00:31:04.400 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:04.400 00:57:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:04.400 ************************************ 00:31:04.400 END TEST nvmf_target_multipath 00:31:04.400 ************************************ 00:31:04.400 00:57:07 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:31:04.400 00:57:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:04.400 00:57:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:04.400 00:57:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.400 ************************************ 00:31:04.400 START TEST nvmf_zcopy 00:31:04.400 ************************************ 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:31:04.400 * Looking for test storage... 
00:31:04.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:31:04.400 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:04.401 Cannot find device "nvmf_tgt_br" 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:04.401 Cannot find device "nvmf_tgt_br2" 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:04.401 Cannot find device "nvmf_tgt_br" 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:04.401 Cannot find device "nvmf_tgt_br2" 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:31:04.401 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:04.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:04.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:04.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:31:04.660 00:31:04.660 --- 10.0.0.2 ping statistics --- 00:31:04.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.660 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:31:04.660 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:04.660 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:04.660 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:31:04.660 00:31:04.660 --- 10.0.0.3 ping statistics --- 00:31:04.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.660 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:04.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:31:04.661 00:31:04.661 --- 10.0.0.1 ping statistics --- 00:31:04.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.661 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=91826 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 91826 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 91826 ']' 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:04.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:04.661 00:57:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:04.920 [2024-05-15 00:57:07.977833] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:31:04.920 [2024-05-15 00:57:07.977942] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.920 [2024-05-15 00:57:08.120517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.920 [2024-05-15 00:57:08.190498] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.920 [2024-05-15 00:57:08.190564] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:04.920 [2024-05-15 00:57:08.190575] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.920 [2024-05-15 00:57:08.190583] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.920 [2024-05-15 00:57:08.190589] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.920 [2024-05-15 00:57:08.190628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.238 [2024-05-15 00:57:08.357178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.238 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.239 [2024-05-15 00:57:08.373074] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:05.239 [2024-05-15 00:57:08.373289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.239 malloc0 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:05.239 { 00:31:05.239 "params": { 00:31:05.239 "name": "Nvme$subsystem", 00:31:05.239 "trtype": "$TEST_TRANSPORT", 00:31:05.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.239 "adrfam": "ipv4", 00:31:05.239 "trsvcid": "$NVMF_PORT", 00:31:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.239 "hdgst": ${hdgst:-false}, 00:31:05.239 "ddgst": ${ddgst:-false} 00:31:05.239 }, 00:31:05.239 "method": "bdev_nvme_attach_controller" 00:31:05.239 } 00:31:05.239 EOF 00:31:05.239 )") 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:31:05.239 00:57:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:05.239 "params": { 00:31:05.239 "name": "Nvme1", 00:31:05.239 "trtype": "tcp", 00:31:05.239 "traddr": "10.0.0.2", 00:31:05.239 "adrfam": "ipv4", 00:31:05.239 "trsvcid": "4420", 00:31:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.239 "hdgst": false, 00:31:05.239 "ddgst": false 00:31:05.239 }, 00:31:05.239 "method": "bdev_nvme_attach_controller" 00:31:05.239 }' 00:31:05.239 [2024-05-15 00:57:08.472164] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:31:05.239 [2024-05-15 00:57:08.472288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91864 ] 00:31:05.513 [2024-05-15 00:57:08.615814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.513 [2024-05-15 00:57:08.718121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.771 Running I/O for 10 seconds... 
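For readers skimming this trace: the run above is bdevperf consuming an inline JSON config. gen_nvmf_target_json prints a bdev_nvme_attach_controller entry for the listener at 10.0.0.2:4420, and zcopy.sh hands it to bdevperf through process substitution, which is where the /dev/fd/62 path comes from. A minimal standalone sketch of that pattern follows; the values are copied from the trace above, while the surrounding "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout and is written out by hand here rather than taken from this log.

# Sketch only: drive bdevperf against the target the way zcopy.sh does,
# feeding the JSON config through process substitution instead of a file.
# Binary path, addresses, NQNs and workload flags come from the trace above;
# the subsystems/bdev wrapper is assumed, not shown in this log.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
config='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}'
# <(...) is what produces a /dev/fd/NN path like the one in the trace above.
"$bdevperf" --json <(printf '%s\n' "$config") -t 10 -q 128 -w verify -o 8192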
00:31:15.742 00:31:15.742 Latency(us) 00:31:15.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:15.742 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:15.742 Verification LBA range: start 0x0 length 0x1000 00:31:15.742 Nvme1n1 : 10.01 6107.46 47.71 0.00 0.00 20889.38 1474.56 34793.66 00:31:15.742 =================================================================================================================== 00:31:15.742 Total : 6107.46 47.71 0.00 0.00 20889.38 1474.56 34793.66 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=91981 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:16.000 { 00:31:16.000 "params": { 00:31:16.000 "name": "Nvme$subsystem", 00:31:16.000 "trtype": "$TEST_TRANSPORT", 00:31:16.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.000 "adrfam": "ipv4", 00:31:16.000 "trsvcid": "$NVMF_PORT", 00:31:16.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.000 "hdgst": ${hdgst:-false}, 00:31:16.000 "ddgst": ${ddgst:-false} 00:31:16.000 }, 00:31:16.000 "method": "bdev_nvme_attach_controller" 00:31:16.000 } 00:31:16.000 EOF 00:31:16.000 )") 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:31:16.000 [2024-05-15 00:57:19.217086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.000 [2024-05-15 00:57:19.217128] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:31:16.000 00:57:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:16.000 "params": { 00:31:16.000 "name": "Nvme1", 00:31:16.000 "trtype": "tcp", 00:31:16.000 "traddr": "10.0.0.2", 00:31:16.000 "adrfam": "ipv4", 00:31:16.000 "trsvcid": "4420", 00:31:16.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.000 "hdgst": false, 00:31:16.000 "ddgst": false 00:31:16.000 }, 00:31:16.000 "method": "bdev_nvme_attach_controller" 00:31:16.000 }' 00:31:16.000 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.000 [2024-05-15 00:57:19.229008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.000 [2024-05-15 00:57:19.229033] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.000 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.000 [2024-05-15 00:57:19.241014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.000 [2024-05-15 00:57:19.241038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.000 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.000 [2024-05-15 00:57:19.253009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.000 [2024-05-15 00:57:19.253034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.000 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.000 [2024-05-15 00:57:19.264785] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:31:16.000 [2024-05-15 00:57:19.264862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91981 ] 00:31:16.000 [2024-05-15 00:57:19.265014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.000 [2024-05-15 00:57:19.265034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.000 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.000 [2024-05-15 00:57:19.277027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.000 [2024-05-15 00:57:19.277049] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.000 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.259 [2024-05-15 00:57:19.289024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.259 [2024-05-15 00:57:19.289049] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.259 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.259 [2024-05-15 00:57:19.301065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.259 [2024-05-15 00:57:19.301086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.259 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.259 [2024-05-15 00:57:19.313053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.259 [2024-05-15 00:57:19.313091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.259 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.259 [2024-05-15 00:57:19.325047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.259 [2024-05-15 00:57:19.325068] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.259 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.259 
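The repeated failures running through the rest of this log are expected behaviour: while bdevperf is doing I/O, the zcopy test keeps re-issuing the same namespace-add RPC, and the target rejects every attempt because NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1. A sketch of one such attempt, reconstructed from the rpc_cmd nvmf_subsystem_add_ns call earlier in this log and from the error lines here; treating scripts/rpc.py as the transport behind rpc_cmd is an assumption.

# Sketch only: one iteration of the namespace-add loop behind the errors above.
# The NQN, bdev name, NSID and socket path appear earlier in this log;
# rpc_cmd is assumed to be a thin wrapper around scripts/rpc.py.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# While NSID 1 is in use the target logs "Requested NSID 1 already in use" /
# "Unable to add namespace" and the call comes back with the JSON-RPC error
# Code=-32602 Msg=Invalid parameters seen in the surrounding lines.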
[2024-05-15 00:57:19.337094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.259 [2024-05-15 00:57:19.337114] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.259 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.259 [2024-05-15 00:57:19.349083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.259 [2024-05-15 00:57:19.349105] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.259 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.259 [2024-05-15 00:57:19.361070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.259 [2024-05-15 00:57:19.361098] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.259 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.259 [2024-05-15 00:57:19.373109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.259 [2024-05-15 00:57:19.373130] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.259 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.259 [2024-05-15 00:57:19.385094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.259 [2024-05-15 00:57:19.385117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.259 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.397117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.397143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 [2024-05-15 00:57:19.401110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.409099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.409121] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.421118] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.421141] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.433107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.433141] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.445124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.445161] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.457127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.457164] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.465121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.465158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.473130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.473165] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.485164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.485201] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.493128] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.493163] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.501130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.501167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.509145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.509178] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.521174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.521214] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.526204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.260 [2024-05-15 00:57:19.533155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 00:57:19.533195] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.260 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.260 [2024-05-15 00:57:19.545177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.260 [2024-05-15 
00:57:19.545231] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.519 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.519 [2024-05-15 00:57:19.553171] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.519 [2024-05-15 00:57:19.553213] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.519 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.519 [2024-05-15 00:57:19.565165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.519 [2024-05-15 00:57:19.565194] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.519 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.519 [2024-05-15 00:57:19.577184] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.519 [2024-05-15 00:57:19.577215] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.519 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.589178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.589209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.597181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.597215] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.605170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.605198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.613226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.613254] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.625215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.625248] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.633191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.633219] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.641191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.641217] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.649194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.649220] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.657203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.657231] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.665224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.665257] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.677236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.677267] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.685230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.685261] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.693239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.693272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.701245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.701275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.709252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.709283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.717246] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.717271] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.725268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.725306] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 Running I/O for 5 seconds... 00:31:16.520 [2024-05-15 00:57:19.733282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.733316] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.747027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.747089] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.762841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.762892] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.779829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.779865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.520 [2024-05-15 00:57:19.796417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.520 [2024-05-15 00:57:19.796465] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.520 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.807107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.807165] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.821990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.822048] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.837854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.837914] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.854643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.854685] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.870974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.871027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.887696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.887739] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.905762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.905802] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:31:16.780 [2024-05-15 00:57:19.922567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.922621] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.937536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.937574] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.954964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.955038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.971552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.971605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.982389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.982422] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:19.997796] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:19.997848] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:20.007854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:20.007888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:20 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:20.019307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:20.019340] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:20.034290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:20.034341] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:16.780 [2024-05-15 00:57:20.050894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.780 [2024-05-15 00:57:20.050945] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.780 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:17.039 [2024-05-15 00:57:20.066583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:17.039 [2024-05-15 00:57:20.066633] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.039 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:17.039 [2024-05-15 00:57:20.083945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:17.039 [2024-05-15 00:57:20.083985] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.039 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:17.039 [2024-05-15 00:57:20.101523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:17.039 [2024-05-15 00:57:20.101587] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.039 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:17.039 [2024-05-15 00:57:20.120098] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:17.039 [2024-05-15 00:57:20.120141] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.039 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:17.039 [2024-05-15 00:57:20.137063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:17.039 [2024-05-15 00:57:20.137099] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.039 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:17.039 [2024-05-15 00:57:20.148242] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:17.039 [2024-05-15 00:57:20.148276] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.039 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:17.039 [2024-05-15 00:57:20.159254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:17.039 [2024-05-15 00:57:20.159294] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.039 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:17.039 [2024-05-15 00:57:20.173963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:17.039 [2024-05-15 00:57:20.174002] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.040 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:17.040 [2024-05-15 00:57:20.184923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:17.040 [2024-05-15 00:57:20.184960] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.040 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:17.040 [2024-05-15 00:57:20.196175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:17.040 [2024-05-15 00:57:20.196225] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:17.040 2024/05/15 00:57:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
(The same three errors then repeat back-to-back from [2024-05-15 00:57:20.209380] through [2024-05-15 00:57:21.227156], elapsed log time 00:31:17.040 to 00:31:18.079: subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use; nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace; error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: Code=-32602 Msg=Invalid parameters. Every attempt asks the target to attach bdev malloc0 as NSID 1 to subsystem nqn.2016-06.io.spdk:cnode1 while NSID 1 is already in use, so each call is rejected with -32602 Invalid parameters.)
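(For context: each failing call above is the nvmf_subsystem_add_ns JSON-RPC whose parameters appear in the params dump. Purely as an illustrative sketch, not the client used by this test run, the same request could be issued from Python roughly as follows; the /var/tmp/spdk.sock socket path and the pre-existing malloc0 bdev are assumptions, not taken from this log.)

#!/usr/bin/env python3
# Illustrative sketch only -- not the RPC client used by this test run.
# Assumes SPDK's usual RPC Unix socket at /var/tmp/spdk.sock and an
# existing bdev named malloc0 (both assumptions, not taken from this log).
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"

def rpc(method, params, req_id=1):
    """Send a single JSON-RPC 2.0 request to the SPDK target and return the reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(
            {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
        ).encode())
        buf = b""
        decoder = json.JSONDecoder()
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                # Keep reading until the accumulated bytes form one complete JSON document.
                reply, _ = decoder.raw_decode(buf.decode())
                return reply
            except ValueError:
                continue

# Same parameters as the failing calls above: attach bdev malloc0 as NSID 1
# to nqn.2016-06.io.spdk:cnode1. With NSID 1 already occupied, the reply
# carries the error object seen in this log: code -32602, "Invalid parameters".
print(rpc("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1},
}))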
(The identical pattern continues without a break from [2024-05-15 00:57:21.238115] through the final attempt below, elapsed log time 00:31:18.079 to 00:31:18.859; every nvmf_subsystem_add_ns call is rejected for the same reason.)
00:31:18.859 [2024-05-15 00:57:22.032927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.859 [2024-05-15 00:57:22.033003] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.859 2024/05/15 00:57:22 error on
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:18.859 [2024-05-15 00:57:22.044087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.859 [2024-05-15 00:57:22.044136] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.859 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:18.859 [2024-05-15 00:57:22.061035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.859 [2024-05-15 00:57:22.061091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.860 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:18.860 [2024-05-15 00:57:22.077088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.860 [2024-05-15 00:57:22.077124] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.860 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:18.860 [2024-05-15 00:57:22.092618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.860 [2024-05-15 00:57:22.092653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.860 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:18.860 [2024-05-15 00:57:22.102874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.860 [2024-05-15 00:57:22.102907] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.860 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:18.860 [2024-05-15 00:57:22.117864] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.860 [2024-05-15 00:57:22.117898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.860 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:18.860 [2024-05-15 00:57:22.128042] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.860 [2024-05-15 00:57:22.128079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.860 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:18.860 [2024-05-15 00:57:22.141882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.860 [2024-05-15 00:57:22.141916] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.860 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.157911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.157945] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.174963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.175014] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.191185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.191220] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.207844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.207885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.225263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.225302] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.240558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.240607] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.251052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.251103] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.265946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.265989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.275996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.276035] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.291045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.291083] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.309831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.309867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.320763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.320793] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.333628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.333683] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.343344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.343374] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.358033] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.358077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.119 [2024-05-15 00:57:22.373169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.119 [2024-05-15 00:57:22.373199] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.119 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.120 [2024-05-15 00:57:22.389470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.120 [2024-05-15 00:57:22.389505] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.120 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.379 [2024-05-15 00:57:22.405638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.379 [2024-05-15 00:57:22.405669] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.379 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.379 [2024-05-15 00:57:22.416225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.379 [2024-05-15 00:57:22.416270] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.379 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.379 [2024-05-15 00:57:22.427218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.379 [2024-05-15 00:57:22.427258] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.379 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.379 [2024-05-15 00:57:22.438178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.379 [2024-05-15 00:57:22.438225] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.379 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.379 [2024-05-15 00:57:22.452766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.379 [2024-05-15 00:57:22.452827] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.379 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.379 [2024-05-15 00:57:22.463649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.379 [2024-05-15 00:57:22.463693] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.379 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.379 [2024-05-15 00:57:22.478213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.379 [2024-05-15 00:57:22.478258] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.379 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.379 [2024-05-15 00:57:22.488237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:19.379 [2024-05-15 00:57:22.488281] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.503476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.503508] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.514218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.514263] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.529244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.529289] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.540102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.540146] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.555189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.555221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.565491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.565522] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.576504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.576550] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.587785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.587829] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.598810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.598869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.611951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.611997] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.627725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.627786] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.645873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.645918] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.380 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.380 [2024-05-15 00:57:22.661563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.380 [2024-05-15 00:57:22.661619] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.639 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.639 [2024-05-15 00:57:22.677414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.639 [2024-05-15 00:57:22.677445] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.639 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.639 [2024-05-15 00:57:22.692446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.639 [2024-05-15 00:57:22.692478] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.639 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.639 [2024-05-15 00:57:22.702158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.639 [2024-05-15 00:57:22.702190] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.639 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.639 [2024-05-15 00:57:22.717204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.639 [2024-05-15 00:57:22.717236] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.639 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.639 [2024-05-15 00:57:22.727361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.639 [2024-05-15 00:57:22.727391] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.639 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.639 [2024-05-15 00:57:22.742488] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.639 [2024-05-15 00:57:22.742523] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.639 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.639 [2024-05-15 00:57:22.759512] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.639 [2024-05-15 00:57:22.759547] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.639 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.639 [2024-05-15 00:57:22.777226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.639 [2024-05-15 00:57:22.777277] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.639 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.639 [2024-05-15 00:57:22.792296] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.792329] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.640 [2024-05-15 00:57:22.808383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.808432] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.640 [2024-05-15 00:57:22.824204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.824256] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.640 [2024-05-15 00:57:22.834238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.834283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.640 [2024-05-15 00:57:22.845325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.845386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.640 [2024-05-15 00:57:22.858632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.858691] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.640 [2024-05-15 00:57:22.869080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.869124] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.640 [2024-05-15 00:57:22.884704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.884752] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.640 [2024-05-15 00:57:22.894777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.894807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.640 [2024-05-15 00:57:22.909404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.909450] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.640 [2024-05-15 00:57:22.919320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.640 [2024-05-15 00:57:22.919352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.640 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:31:19.899 [2024-05-15 00:57:22.933259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:22.933305] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:22.943442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:22.943504] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:22.954339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:22.954387] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:22.965128] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:22.965174] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:22.976081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:22.976137] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:22.987967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:22.988001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:23.005822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:23.005887] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:23 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:23.021283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:23.021333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:23.031859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:23.031891] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:23.046355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:23.046403] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:23.064721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:23.064771] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:23.080677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:23.080724] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:23.098241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:23.098288] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:23.113717] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.899 [2024-05-15 00:57:23.113748] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.899 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.899 [2024-05-15 00:57:23.124385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.900 [2024-05-15 00:57:23.124414] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.900 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.900 [2024-05-15 00:57:23.139152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.900 [2024-05-15 00:57:23.139183] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.900 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.900 [2024-05-15 00:57:23.156067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.900 [2024-05-15 00:57:23.156111] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.900 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.900 [2024-05-15 00:57:23.166373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.900 [2024-05-15 00:57:23.166417] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.900 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:19.900 [2024-05-15 00:57:23.182635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.900 [2024-05-15 00:57:23.182695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.158 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:20.158 [2024-05-15 00:57:23.198683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.158 [2024-05-15 00:57:23.198734] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.158 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:31:20.158 [2024-05-15 00:57:23.216578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:20.158 [2024-05-15 00:57:23.216617] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:20.158 2024/05/15 00:57:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
(the identical three-line NSID-conflict error is logged for every further nvmf_subsystem_add_ns attempt, timestamps 00:57:23.231 through 00:57:24.737)
00:31:21.487
00:31:21.487 Latency(us)
00:31:21.487 Device Information                                                             : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:31:21.487 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:21.487 	 Nvme1n1                                                                     :       5.01   11484.43      89.72      0.00     0.00   11131.16    4944.99   21686.46
00:31:21.487 ===================================================================================================================
00:31:21.487 	 Total                                                                       :              11484.43      89.72      0.00     0.00   11131.16    4944.99   21686.46
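The latency summary above is internally consistent: at the 8192-byte I/O size shown in the job description, the reported IOPS implies the reported MiB/s. A quick check (plain Python, not part of the test output; values copied from the table) is:

# Quick arithmetic check of the latency summary above (numbers copied from the log).
iops = 11484.43          # Nvme1n1 IOPS column
io_size_bytes = 8192     # "IO size: 8192" from the job description
runtime_s = 5.01         # runtime(s) column

mib_per_s = iops * io_size_bytes / (1024 * 1024)
print(f"throughput  ~ {mib_per_s:.2f} MiB/s")        # ~89.72, matching the MiB/s column
print(f"I/O count   ~ {iops * runtime_s:.0f} ops")   # ~57537 completions over the 5.01 s run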
00:31:21.487 [2024-05-15 00:57:24.747328] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:21.487 [2024-05-15 00:57:24.747356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:21.487 2024/05/15 00:57:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
(identical NSID-conflict entries continue after the latency summary, timestamps 00:57:24.755 through 00:57:24.955)
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:21.747 [2024-05-15 00:57:24.947362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.747 [2024-05-15 00:57:24.947386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.747 2024/05/15 00:57:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:21.747 [2024-05-15 00:57:24.955362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:21.747 [2024-05-15 00:57:24.955387] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:21.747 2024/05/15 00:57:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:21.747 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (91981) - No such process 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 91981 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.747 delay0 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:21.747 00:57:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:22.006 [2024-05-15 00:57:25.146814] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:28.571 Initializing NVMe Controllers 00:31:28.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:28.571 Initialization complete. Launching workers. 
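The block above swaps the plain malloc0 namespace for a delay bdev before launching the abort example, so that outstanding I/O stays queued long enough to be aborted. rpc_cmd in the trace is the suite's thin wrapper around scripts/rpc.py; a minimal standalone sketch of the same sequence, reusing the SPDK repo path and the 10.0.0.2:4420 listener from the trace (everything else is an assumption), might look like:

  # sketch only: reproduce the zcopy.sh delay-bdev/abort steps by hand
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"
  $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1            # drop the malloc0 namespace (NSID 1)
  $RPC bdev_delay_create -b malloc0 -d delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                      # wrap malloc0 with the delay values from the trace
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1     # expose the slow bdev as NSID 1
  "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'    # drive I/O and abort it in flight

The NS/CTRLR abort counters printed on the next lines are the output of this run.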
00:31:28.571 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 66 00:31:28.571 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 353, failed to submit 33 00:31:28.571 success 168, unsuccess 185, failed 0 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:28.571 rmmod nvme_tcp 00:31:28.571 rmmod nvme_fabrics 00:31:28.571 rmmod nvme_keyring 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:31:28.571 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 91826 ']' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 91826 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 91826 ']' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 91826 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 91826 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 91826' 00:31:28.572 killing process with pid 91826 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 91826 00:31:28.572 [2024-05-15 00:57:31.319427] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 91826 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.572 00:57:31 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:28.572 00:31:28.572 real 0m24.099s 00:31:28.572 user 0m39.206s 00:31:28.572 sys 0m6.713s 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:28.572 00:57:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:28.572 ************************************ 00:31:28.572 END TEST nvmf_zcopy 00:31:28.572 ************************************ 00:31:28.572 00:57:31 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:31:28.572 00:57:31 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:28.572 00:57:31 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:28.572 00:57:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:28.572 ************************************ 00:31:28.572 START TEST nvmf_nmic 00:31:28.572 ************************************ 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:31:28.572 * Looking for test storage... 00:31:28.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:28.572 
00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:28.572 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:28.573 Cannot find device "nvmf_tgt_br" 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:28.573 Cannot find device "nvmf_tgt_br2" 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:28.573 Cannot find device "nvmf_tgt_br" 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:28.573 Cannot find device "nvmf_tgt_br2" 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- 
# ip link delete nvmf_br type bridge 00:31:28.573 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:28.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:28.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:28.832 00:57:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:28.832 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:28.832 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:28.832 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:28.832 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:28.832 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:28.832 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:28.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:28.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:31:28.832 00:31:28.832 --- 10.0.0.2 ping statistics --- 00:31:28.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.832 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:31:28.832 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:28.832 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:28.832 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:31:28.832 00:31:28.832 --- 10.0.0.3 ping statistics --- 00:31:28.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.832 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:31:28.832 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:28.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:28.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:31:28.832 00:31:28.832 --- 10.0.0.1 ping statistics --- 00:31:28.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.833 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=92303 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 92303 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 92303 ']' 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:28.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:28.833 00:57:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:29.091 [2024-05-15 00:57:32.148621] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
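The trace above shows nvmfappstart launching nvmf_tgt inside the freshly created namespace (-i 0 -e 0xFFFF -m 0xF) and waitforlisten blocking until the application's JSON-RPC socket at /var/tmp/spdk.sock answers; the DPDK EAL parameter dump that follows is the application coming up. A hand-rolled approximation of that startup (same binary, namespace and default socket path as in the trace; polling with the spdk_get_version RPC is this sketch's choice, not necessarily what waitforlisten does internally):

  # sketch: start the target in its namespace and wait for the RPC socket to answer
  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5                       # keep polling until the app is listening
  done
  echo "nvmf_tgt ready (pid $nvmfpid)"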
00:31:29.092 [2024-05-15 00:57:32.148983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.092 [2024-05-15 00:57:32.292000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:29.350 [2024-05-15 00:57:32.388054] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.350 [2024-05-15 00:57:32.388124] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.350 [2024-05-15 00:57:32.388138] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.350 [2024-05-15 00:57:32.388149] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.350 [2024-05-15 00:57:32.388158] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:29.350 [2024-05-15 00:57:32.388331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.350 [2024-05-15 00:57:32.388934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.350 [2024-05-15 00:57:32.389129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:29.350 [2024-05-15 00:57:32.389208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:29.918 [2024-05-15 00:57:33.158827] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:29.918 Malloc0 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.918 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:30.176 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:30.176 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:30.176 00:57:33 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:30.176 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:30.176 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:30.177 [2024-05-15 00:57:33.224730] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:30.177 [2024-05-15 00:57:33.224997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:30.177 test case1: single bdev can't be used in multiple subsystems 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:30.177 [2024-05-15 00:57:33.248777] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:30.177 [2024-05-15 00:57:33.248812] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:30.177 [2024-05-15 00:57:33.248824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.177 2024/05/15 00:57:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:30.177 request: 00:31:30.177 { 00:31:30.177 "method": "nvmf_subsystem_add_ns", 00:31:30.177 "params": { 00:31:30.177 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:30.177 "namespace": { 00:31:30.177 "bdev_name": "Malloc0", 00:31:30.177 "no_auto_visible": false 00:31:30.177 } 00:31:30.177 } 00:31:30.177 } 00:31:30.177 Got JSON-RPC error 
response 00:31:30.177 GoRPCClient: error on JSON-RPC call 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:30.177 Adding namespace failed - expected result. 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:30.177 test case2: host connect to nvmf target in multiple paths 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:30.177 [2024-05-15 00:57:33.260916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:30.177 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:30.435 00:57:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:30.435 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:31:30.435 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:31:30.435 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:31:30.435 00:57:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:31:32.337 00:57:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:31:32.337 00:57:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:31:32.337 00:57:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:31:32.337 00:57:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:31:32.337 00:57:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:31:32.337 00:57:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:31:32.337 00:57:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:32.595 [global] 00:31:32.595 thread=1 00:31:32.595 invalidate=1 00:31:32.595 rw=write 00:31:32.595 time_based=1 00:31:32.595 runtime=1 00:31:32.595 ioengine=libaio 00:31:32.595 direct=1 00:31:32.595 bs=4096 00:31:32.595 iodepth=1 00:31:32.595 norandommap=0 00:31:32.595 numjobs=1 00:31:32.595 00:31:32.595 verify_dump=1 00:31:32.595 verify_backlog=512 00:31:32.595 verify_state_save=0 00:31:32.595 do_verify=1 00:31:32.595 verify=crc32c-intel 00:31:32.595 [job0] 00:31:32.595 filename=/dev/nvme0n1 
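The job file above is what scripts/fio-wrapper generates for '-p nvmf -i 4096 -d 1 -t write -r 1 -v' and runs against the newly connected /dev/nvme0n1. The same workload can be expressed directly on the fio command line; a rough flag-for-flag translation of the job file (not the wrapper's literal invocation) would be:

  # approximate command-line equivalent of the generated job0 section
  fio --name=job0 --filename=/dev/nvme0n1 \
      --ioengine=libaio --direct=1 --thread --invalidate=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0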
00:31:32.595 Could not set queue depth (nvme0n1) 00:31:32.595 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.595 fio-3.35 00:31:32.595 Starting 1 thread 00:31:33.972 00:31:33.972 job0: (groupid=0, jobs=1): err= 0: pid=92414: Wed May 15 00:57:36 2024 00:31:33.972 read: IOPS=3106, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1000msec) 00:31:33.972 slat (nsec): min=13775, max=61748, avg=18978.48, stdev=5597.25 00:31:33.972 clat (usec): min=124, max=1811, avg=147.64, stdev=33.16 00:31:33.972 lat (usec): min=138, max=1831, avg=166.62, stdev=34.09 00:31:33.972 clat percentiles (usec): 00:31:33.972 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:31:33.972 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:31:33.972 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 169], 00:31:33.972 | 99.00th=[ 190], 99.50th=[ 225], 99.90th=[ 306], 99.95th=[ 347], 00:31:33.972 | 99.99th=[ 1811] 00:31:33.972 write: IOPS=3584, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1000msec); 0 zone resets 00:31:33.972 slat (usec): min=19, max=100, avg=26.34, stdev= 8.88 00:31:33.972 clat (usec): min=84, max=791, avg=104.45, stdev=16.76 00:31:33.972 lat (usec): min=105, max=824, avg=130.78, stdev=20.12 00:31:33.972 clat percentiles (usec): 00:31:33.972 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 96], 00:31:33.972 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 104], 00:31:33.972 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 118], 95.00th=[ 125], 00:31:33.972 | 99.00th=[ 139], 99.50th=[ 153], 99.90th=[ 235], 99.95th=[ 379], 00:31:33.972 | 99.99th=[ 791] 00:31:33.972 bw ( KiB/s): min=14296, max=14296, per=99.72%, avg=14296.00, stdev= 0.00, samples=1 00:31:33.972 iops : min= 3574, max= 3574, avg=3574.00, stdev= 0.00, samples=1 00:31:33.972 lat (usec) : 100=23.12%, 250=76.67%, 500=0.18%, 1000=0.01% 00:31:33.972 lat (msec) : 2=0.01% 00:31:33.972 cpu : usr=3.10%, sys=11.20%, ctx=6690, majf=0, minf=2 00:31:33.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.972 issued rwts: total=3106,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.972 00:31:33.972 Run status group 0 (all jobs): 00:31:33.972 READ: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=12.1MiB (12.7MB), run=1000-1000msec 00:31:33.972 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1000-1000msec 00:31:33.972 00:31:33.972 Disk stats (read/write): 00:31:33.972 nvme0n1: ios=2951/3072, merge=0/0, ticks=493/372, in_queue=865, util=91.38% 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:33.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 
00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:33.972 00:57:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:33.972 rmmod nvme_tcp 00:31:33.972 rmmod nvme_fabrics 00:31:33.972 rmmod nvme_keyring 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 92303 ']' 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 92303 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 92303 ']' 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 92303 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 92303 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:33.972 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:33.972 killing process with pid 92303 00:31:33.973 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 92303' 00:31:33.973 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 92303 00:31:33.973 [2024-05-15 00:57:37.132314] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:33.973 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 92303 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:34.232 00:31:34.232 real 0m5.780s 
00:31:34.232 user 0m19.404s 00:31:34.232 sys 0m1.426s 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:34.232 00:57:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:34.232 ************************************ 00:31:34.232 END TEST nvmf_nmic 00:31:34.232 ************************************ 00:31:34.232 00:57:37 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:31:34.232 00:57:37 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:34.232 00:57:37 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:34.232 00:57:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:34.232 ************************************ 00:31:34.232 START TEST nvmf_fio_target 00:31:34.232 ************************************ 00:31:34.232 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:31:34.491 * Looking for test storage... 00:31:34.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.491 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.492 
00:57:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:34.492 Cannot find device "nvmf_tgt_br" 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:34.492 Cannot find device "nvmf_tgt_br2" 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:34.492 Cannot find device "nvmf_tgt_br" 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:34.492 Cannot find device "nvmf_tgt_br2" 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:34.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:34.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:34.492 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # 
ip link set nvmf_tgt_br master nvmf_br 00:31:34.751 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:34.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:31:34.752 00:31:34.752 --- 10.0.0.2 ping statistics --- 00:31:34.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.752 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:34.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:34.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:31:34.752 00:31:34.752 --- 10.0.0.3 ping statistics --- 00:31:34.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.752 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:34.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:34.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:31:34.752 00:31:34.752 --- 10.0.0.1 ping statistics --- 00:31:34.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.752 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=92591 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 92591 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 92591 ']' 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 
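The nvmf_veth_init sequence traced above builds the self-contained topology the rest of this run relies on: one veth pair for the initiator, two veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace for the target, the host-side peers enslaved to the nvmf_br bridge, and an iptables rule admitting NVMe/TCP traffic on port 4420. The following is a condensed sketch of that setup, not the verbatim nvmf/common.sh code; interface, namespace, and address names are exactly the ones shown in the trace.

# target namespace and the three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up on both sides of the namespace boundary
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side peers and open TCP/4420 for NVMe/TCP
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# reachability check in both directions
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Once the pings succeed, nvmf_tgt is launched inside the namespace (NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD "ip netns exec nvmf_tgt_ns_spdk"), while fio and the nvme CLI run on the host side of the bridge.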
00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:34.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:34.752 00:57:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:34.752 [2024-05-15 00:57:37.949029] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:31:34.752 [2024-05-15 00:57:37.949134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.011 [2024-05-15 00:57:38.086433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:35.011 [2024-05-15 00:57:38.173907] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.011 [2024-05-15 00:57:38.173960] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.011 [2024-05-15 00:57:38.173986] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.011 [2024-05-15 00:57:38.173994] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.011 [2024-05-15 00:57:38.174002] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.011 [2024-05-15 00:57:38.174654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.011 [2024-05-15 00:57:38.174788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.011 [2024-05-15 00:57:38.174876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:35.011 [2024-05-15 00:57:38.174880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.947 00:57:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:35.947 00:57:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:31:35.947 00:57:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:35.947 00:57:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:35.947 00:57:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:35.947 00:57:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.947 00:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:35.947 [2024-05-15 00:57:39.218300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.206 00:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:36.465 00:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:36.465 00:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:36.724 00:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:36.724 
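From this point fio.sh provisions the entire target configuration over the rpc_py helper defined at the top of the script before any I/O is issued: a TCP transport, seven malloc bdevs, a RAID-0 and a concat bdev built from five of them, and one subsystem exposing four namespaces. A condensed sketch of that sequence, assembled only from the rpc.py and nvme-cli calls traced in this run (the NQN, serial, listen address, and host UUID are the ones that appear in the log), is:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192

# plain namespaces
$rpc_py bdev_malloc_create 64 512          # -> Malloc0
$rpc_py bdev_malloc_create 64 512          # -> Malloc1

# RAID-0 over two malloc bdevs, concat over three more
$rpc_py bdev_malloc_create 64 512          # -> Malloc2
$rpc_py bdev_malloc_create 64 512          # -> Malloc3
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc_py bdev_malloc_create 64 512          # -> Malloc4
$rpc_py bdev_malloc_create 64 512          # -> Malloc5
$rpc_py bdev_malloc_create 64 512          # -> Malloc6
$rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# one subsystem, four namespaces, listener on the namespaced target IP
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# connect from the initiator; the four namespaces appear as /dev/nvme0n1..n4
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 \
  --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5

The waitforserial helper then polls lsblk until four block devices report the SPDKISFASTANDAWESOME serial, and only after that do the fio-wrapper jobs below start.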
00:57:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:36.983 00:57:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:36.983 00:57:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:37.241 00:57:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:37.241 00:57:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:37.499 00:57:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:37.757 00:57:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:37.757 00:57:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:38.051 00:57:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:38.051 00:57:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:38.323 00:57:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:38.323 00:57:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:38.582 00:57:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:38.840 00:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:38.840 00:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:39.098 00:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:39.098 00:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:39.357 00:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.615 [2024-05-15 00:57:42.653470] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:39.615 [2024-05-15 00:57:42.654192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.616 00:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:39.616 00:57:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:39.874 00:57:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:40.133 00:57:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:31:40.133 00:57:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:31:40.133 00:57:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:31:40.133 00:57:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:31:40.133 00:57:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:31:40.133 00:57:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:31:42.037 00:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:31:42.037 00:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:31:42.037 00:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:31:42.037 00:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:31:42.037 00:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:31:42.037 00:57:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:31:42.037 00:57:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:42.296 [global] 00:31:42.296 thread=1 00:31:42.296 invalidate=1 00:31:42.296 rw=write 00:31:42.296 time_based=1 00:31:42.296 runtime=1 00:31:42.296 ioengine=libaio 00:31:42.296 direct=1 00:31:42.296 bs=4096 00:31:42.296 iodepth=1 00:31:42.296 norandommap=0 00:31:42.296 numjobs=1 00:31:42.296 00:31:42.296 verify_dump=1 00:31:42.296 verify_backlog=512 00:31:42.296 verify_state_save=0 00:31:42.296 do_verify=1 00:31:42.296 verify=crc32c-intel 00:31:42.296 [job0] 00:31:42.296 filename=/dev/nvme0n1 00:31:42.296 [job1] 00:31:42.296 filename=/dev/nvme0n2 00:31:42.296 [job2] 00:31:42.296 filename=/dev/nvme0n3 00:31:42.296 [job3] 00:31:42.296 filename=/dev/nvme0n4 00:31:42.296 Could not set queue depth (nvme0n1) 00:31:42.296 Could not set queue depth (nvme0n2) 00:31:42.296 Could not set queue depth (nvme0n3) 00:31:42.296 Could not set queue depth (nvme0n4) 00:31:42.296 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.296 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.296 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.296 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:42.296 fio-3.35 00:31:42.296 Starting 4 threads 00:31:43.719 00:31:43.719 job0: (groupid=0, jobs=1): err= 0: pid=92879: Wed May 15 00:57:46 2024 00:31:43.719 read: IOPS=1886, BW=7544KiB/s (7726kB/s)(7552KiB/1001msec) 00:31:43.719 slat (nsec): min=12431, max=95517, avg=15577.81, stdev=2466.96 00:31:43.719 clat (usec): min=217, max=2051, avg=262.54, stdev=43.57 00:31:43.719 lat (usec): min=232, max=2064, avg=278.12, stdev=43.57 00:31:43.719 clat percentiles (usec): 00:31:43.719 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 251], 00:31:43.719 | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:31:43.719 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 281], 00:31:43.719 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 594], 99.95th=[ 2057], 00:31:43.719 | 99.99th=[ 2057] 00:31:43.719 write: 
IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:43.719 slat (nsec): min=15013, max=96104, avg=23462.96, stdev=4688.06 00:31:43.719 clat (usec): min=97, max=309, avg=204.79, stdev=13.75 00:31:43.719 lat (usec): min=131, max=385, avg=228.26, stdev=13.46 00:31:43.719 clat percentiles (usec): 00:31:43.719 | 1.00th=[ 174], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:31:43.719 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:31:43.719 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 227], 00:31:43.719 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 285], 99.95th=[ 289], 00:31:43.719 | 99.99th=[ 310] 00:31:43.719 bw ( KiB/s): min= 8192, max= 8192, per=20.13%, avg=8192.00, stdev= 0.00, samples=1 00:31:43.719 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:43.719 lat (usec) : 100=0.03%, 250=58.69%, 500=41.23%, 750=0.03% 00:31:43.719 lat (msec) : 4=0.03% 00:31:43.719 cpu : usr=2.00%, sys=5.60%, ctx=3939, majf=0, minf=13 00:31:43.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.719 issued rwts: total=1888,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.719 job1: (groupid=0, jobs=1): err= 0: pid=92880: Wed May 15 00:57:46 2024 00:31:43.719 read: IOPS=1887, BW=7548KiB/s (7730kB/s)(7556KiB/1001msec) 00:31:43.719 slat (nsec): min=11978, max=47339, avg=14424.65, stdev=2776.16 00:31:43.719 clat (usec): min=159, max=2057, avg=263.72, stdev=43.88 00:31:43.719 lat (usec): min=172, max=2072, avg=278.15, stdev=43.84 00:31:43.719 clat percentiles (usec): 00:31:43.719 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 253], 00:31:43.719 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 265], 00:31:43.719 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 281], 00:31:43.719 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 594], 99.95th=[ 2057], 00:31:43.719 | 99.99th=[ 2057] 00:31:43.719 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:43.719 slat (nsec): min=15337, max=94910, avg=23380.49, stdev=4837.15 00:31:43.719 clat (usec): min=103, max=297, avg=204.73, stdev=13.93 00:31:43.719 lat (usec): min=133, max=319, avg=228.11, stdev=13.18 00:31:43.719 clat percentiles (usec): 00:31:43.719 | 1.00th=[ 174], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:31:43.719 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:31:43.719 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 225], 00:31:43.719 | 99.00th=[ 237], 99.50th=[ 251], 99.90th=[ 289], 99.95th=[ 297], 00:31:43.719 | 99.99th=[ 297] 00:31:43.719 bw ( KiB/s): min= 8192, max= 8192, per=20.13%, avg=8192.00, stdev= 0.00, samples=1 00:31:43.719 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:43.719 lat (usec) : 250=58.12%, 500=41.83%, 750=0.03% 00:31:43.719 lat (msec) : 4=0.03% 00:31:43.719 cpu : usr=1.50%, sys=6.00%, ctx=3941, majf=0, minf=12 00:31:43.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.719 issued rwts: total=1889,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.719 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:31:43.719 job2: (groupid=0, jobs=1): err= 0: pid=92881: Wed May 15 00:57:46 2024 00:31:43.719 read: IOPS=2639, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:31:43.719 slat (nsec): min=14046, max=44935, avg=17416.13, stdev=2569.37 00:31:43.719 clat (usec): min=148, max=1787, avg=173.30, stdev=37.41 00:31:43.719 lat (usec): min=162, max=1807, avg=190.72, stdev=37.60 00:31:43.719 clat percentiles (usec): 00:31:43.719 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 163], 00:31:43.719 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:31:43.719 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:31:43.719 | 99.00th=[ 215], 99.50th=[ 285], 99.90th=[ 424], 99.95th=[ 816], 00:31:43.719 | 99.99th=[ 1795] 00:31:43.719 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:31:43.719 slat (nsec): min=17300, max=93773, avg=24708.20, stdev=4680.52 00:31:43.719 clat (usec): min=111, max=275, avg=133.39, stdev= 9.83 00:31:43.719 lat (usec): min=133, max=295, avg=158.10, stdev=11.18 00:31:43.719 clat percentiles (usec): 00:31:43.719 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 127], 00:31:43.719 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 135], 00:31:43.719 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 149], 00:31:43.719 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 237], 99.95th=[ 269], 00:31:43.719 | 99.99th=[ 277] 00:31:43.719 bw ( KiB/s): min=12288, max=12288, per=30.19%, avg=12288.00, stdev= 0.00, samples=1 00:31:43.719 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:43.719 lat (usec) : 250=99.54%, 500=0.42%, 1000=0.02% 00:31:43.719 lat (msec) : 2=0.02% 00:31:43.719 cpu : usr=2.30%, sys=9.00%, ctx=5723, majf=0, minf=7 00:31:43.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.719 issued rwts: total=2642,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.719 job3: (groupid=0, jobs=1): err= 0: pid=92882: Wed May 15 00:57:46 2024 00:31:43.719 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:31:43.719 slat (nsec): min=13746, max=77625, avg=16670.35, stdev=2584.01 00:31:43.719 clat (usec): min=119, max=2406, avg=180.95, stdev=46.85 00:31:43.719 lat (usec): min=173, max=2427, avg=197.62, stdev=46.97 00:31:43.719 clat percentiles (usec): 00:31:43.719 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 167], 20.00th=[ 172], 00:31:43.719 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:31:43.719 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 198], 00:31:43.719 | 99.00th=[ 215], 99.50th=[ 229], 99.90th=[ 562], 99.95th=[ 627], 00:31:43.719 | 99.99th=[ 2409] 00:31:43.719 write: IOPS=3013, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1001msec); 0 zone resets 00:31:43.719 slat (nsec): min=19780, max=87527, avg=23611.72, stdev=4236.27 00:31:43.719 clat (usec): min=106, max=492, avg=136.88, stdev=12.29 00:31:43.719 lat (usec): min=135, max=518, avg=160.49, stdev=13.45 00:31:43.719 clat percentiles (usec): 00:31:43.719 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 129], 00:31:43.719 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:31:43.719 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 155], 00:31:43.719 | 
99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 225], 99.95th=[ 245], 00:31:43.719 | 99.99th=[ 494] 00:31:43.719 bw ( KiB/s): min=12288, max=12288, per=30.19%, avg=12288.00, stdev= 0.00, samples=1 00:31:43.719 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:43.719 lat (usec) : 250=99.82%, 500=0.13%, 750=0.04% 00:31:43.719 lat (msec) : 4=0.02% 00:31:43.719 cpu : usr=2.00%, sys=8.70%, ctx=5578, majf=0, minf=5 00:31:43.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.719 issued rwts: total=2560,3017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:43.719 00:31:43.719 Run status group 0 (all jobs): 00:31:43.719 READ: bw=35.0MiB/s (36.7MB/s), 7544KiB/s-10.3MiB/s (7726kB/s-10.8MB/s), io=35.1MiB (36.8MB), run=1001-1001msec 00:31:43.719 WRITE: bw=39.7MiB/s (41.7MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.8MiB (41.7MB), run=1001-1001msec 00:31:43.719 00:31:43.719 Disk stats (read/write): 00:31:43.719 nvme0n1: ios=1586/1885, merge=0/0, ticks=487/401, in_queue=888, util=92.98% 00:31:43.719 nvme0n2: ios=1585/1885, merge=0/0, ticks=418/406, in_queue=824, util=88.66% 00:31:43.719 nvme0n3: ios=2345/2560, merge=0/0, ticks=418/371, in_queue=789, util=89.27% 00:31:43.719 nvme0n4: ios=2291/2560, merge=0/0, ticks=496/377, in_queue=873, util=93.67% 00:31:43.719 00:57:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:43.719 [global] 00:31:43.719 thread=1 00:31:43.719 invalidate=1 00:31:43.719 rw=randwrite 00:31:43.719 time_based=1 00:31:43.719 runtime=1 00:31:43.719 ioengine=libaio 00:31:43.719 direct=1 00:31:43.719 bs=4096 00:31:43.719 iodepth=1 00:31:43.719 norandommap=0 00:31:43.719 numjobs=1 00:31:43.719 00:31:43.719 verify_dump=1 00:31:43.719 verify_backlog=512 00:31:43.719 verify_state_save=0 00:31:43.719 do_verify=1 00:31:43.719 verify=crc32c-intel 00:31:43.719 [job0] 00:31:43.719 filename=/dev/nvme0n1 00:31:43.719 [job1] 00:31:43.719 filename=/dev/nvme0n2 00:31:43.719 [job2] 00:31:43.719 filename=/dev/nvme0n3 00:31:43.719 [job3] 00:31:43.720 filename=/dev/nvme0n4 00:31:43.720 Could not set queue depth (nvme0n1) 00:31:43.720 Could not set queue depth (nvme0n2) 00:31:43.720 Could not set queue depth (nvme0n3) 00:31:43.720 Could not set queue depth (nvme0n4) 00:31:43.720 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.720 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.720 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.720 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.720 fio-3.35 00:31:43.720 Starting 4 threads 00:31:45.094 00:31:45.094 job0: (groupid=0, jobs=1): err= 0: pid=92941: Wed May 15 00:57:48 2024 00:31:45.094 read: IOPS=2893, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec) 00:31:45.094 slat (nsec): min=13583, max=38410, avg=15786.57, stdev=1958.83 00:31:45.094 clat (usec): min=135, max=1574, avg=161.14, stdev=32.13 00:31:45.094 lat (usec): min=153, max=1591, avg=176.93, stdev=32.23 00:31:45.094 clat percentiles (usec): 00:31:45.094 | 
1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:31:45.094 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:31:45.094 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 180], 00:31:45.094 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 285], 99.95th=[ 322], 00:31:45.094 | 99.99th=[ 1582] 00:31:45.094 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:31:45.094 slat (nsec): min=19773, max=69952, avg=22342.56, stdev=4169.24 00:31:45.094 clat (usec): min=89, max=1698, avg=133.01, stdev=43.17 00:31:45.094 lat (usec): min=122, max=1720, avg=155.36, stdev=44.04 00:31:45.094 clat percentiles (usec): 00:31:45.094 | 1.00th=[ 105], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 115], 00:31:45.094 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 125], 00:31:45.094 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 200], 95.00th=[ 212], 00:31:45.094 | 99.00th=[ 229], 99.50th=[ 243], 99.90th=[ 338], 99.95th=[ 510], 00:31:45.094 | 99.99th=[ 1696] 00:31:45.094 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:31:45.094 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:45.094 lat (usec) : 100=0.03%, 250=99.36%, 500=0.55%, 750=0.02% 00:31:45.094 lat (msec) : 2=0.03% 00:31:45.094 cpu : usr=2.60%, sys=8.10%, ctx=5970, majf=0, minf=9 00:31:45.094 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.094 issued rwts: total=2896,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.094 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.094 job1: (groupid=0, jobs=1): err= 0: pid=92942: Wed May 15 00:57:48 2024 00:31:45.094 read: IOPS=1565, BW=6262KiB/s (6412kB/s)(6268KiB/1001msec) 00:31:45.094 slat (nsec): min=9187, max=34628, avg=15478.42, stdev=2186.97 00:31:45.094 clat (usec): min=179, max=412, avg=296.39, stdev=39.54 00:31:45.094 lat (usec): min=192, max=428, avg=311.87, stdev=39.54 00:31:45.094 clat percentiles (usec): 00:31:45.094 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:31:45.094 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:31:45.094 | 70.00th=[ 302], 80.00th=[ 322], 90.00th=[ 367], 95.00th=[ 375], 00:31:45.094 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 408], 99.95th=[ 412], 00:31:45.094 | 99.99th=[ 412] 00:31:45.094 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:45.094 slat (nsec): min=11675, max=89967, avg=22039.71, stdev=6288.02 00:31:45.094 clat (usec): min=108, max=7777, avg=224.39, stdev=206.51 00:31:45.094 lat (usec): min=139, max=7810, avg=246.43, stdev=207.00 00:31:45.094 clat percentiles (usec): 00:31:45.094 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 182], 00:31:45.094 | 30.00th=[ 196], 40.00th=[ 206], 50.00th=[ 219], 60.00th=[ 231], 00:31:45.094 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 281], 00:31:45.094 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 3163], 99.95th=[ 3752], 00:31:45.094 | 99.99th=[ 7767] 00:31:45.094 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:31:45.094 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:45.094 lat (usec) : 250=43.51%, 500=56.29%, 750=0.03%, 1000=0.03% 00:31:45.094 lat (msec) : 2=0.06%, 4=0.06%, 10=0.03% 00:31:45.094 cpu : usr=1.50%, sys=5.40%, ctx=3616, 
majf=0, minf=13 00:31:45.094 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.094 issued rwts: total=1567,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.094 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.094 job2: (groupid=0, jobs=1): err= 0: pid=92943: Wed May 15 00:57:48 2024 00:31:45.094 read: IOPS=2708, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:31:45.094 slat (nsec): min=13893, max=44778, avg=17422.70, stdev=2851.45 00:31:45.094 clat (usec): min=144, max=543, avg=168.55, stdev=12.25 00:31:45.094 lat (usec): min=160, max=559, avg=185.97, stdev=12.66 00:31:45.094 clat percentiles (usec): 00:31:45.094 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:31:45.095 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:31:45.095 | 70.00th=[ 174], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:31:45.095 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 277], 99.95th=[ 289], 00:31:45.095 | 99.99th=[ 545] 00:31:45.095 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:31:45.095 slat (usec): min=20, max=118, avg=25.44, stdev= 6.24 00:31:45.095 clat (usec): min=94, max=1948, avg=132.43, stdev=35.46 00:31:45.095 lat (usec): min=134, max=1975, avg=157.87, stdev=36.38 00:31:45.095 clat percentiles (usec): 00:31:45.095 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 124], 00:31:45.095 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 133], 00:31:45.095 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:31:45.095 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 231], 99.95th=[ 644], 00:31:45.095 | 99.99th=[ 1942] 00:31:45.095 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:31:45.095 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:45.095 lat (usec) : 100=0.02%, 250=99.86%, 500=0.07%, 750=0.03% 00:31:45.095 lat (msec) : 2=0.02% 00:31:45.095 cpu : usr=2.30%, sys=9.40%, ctx=5788, majf=0, minf=15 00:31:45.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.095 issued rwts: total=2711,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.095 job3: (groupid=0, jobs=1): err= 0: pid=92944: Wed May 15 00:57:48 2024 00:31:45.095 read: IOPS=1855, BW=7421KiB/s (7599kB/s)(7428KiB/1001msec) 00:31:45.095 slat (nsec): min=11952, max=40059, avg=15251.89, stdev=2098.15 00:31:45.095 clat (usec): min=161, max=772, avg=276.93, stdev=53.95 00:31:45.095 lat (usec): min=176, max=787, avg=292.18, stdev=53.82 00:31:45.095 clat percentiles (usec): 00:31:45.095 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 251], 00:31:45.095 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:31:45.095 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 351], 95.00th=[ 363], 00:31:45.095 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 404], 99.95th=[ 775], 00:31:45.095 | 99.99th=[ 775] 00:31:45.095 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:45.095 slat (usec): min=11, max=106, avg=20.38, stdev= 4.04 00:31:45.095 clat (usec): min=103, max=423, avg=199.89, 
stdev=53.41 00:31:45.095 lat (usec): min=133, max=444, avg=220.27, stdev=52.81 00:31:45.095 clat percentiles (usec): 00:31:45.095 | 1.00th=[ 123], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 143], 00:31:45.095 | 30.00th=[ 153], 40.00th=[ 174], 50.00th=[ 194], 60.00th=[ 225], 00:31:45.095 | 70.00th=[ 243], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:31:45.095 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 318], 99.95th=[ 318], 00:31:45.095 | 99.99th=[ 424] 00:31:45.095 bw ( KiB/s): min= 8432, max= 8432, per=20.61%, avg=8432.00, stdev= 0.00, samples=1 00:31:45.095 iops : min= 2108, max= 2108, avg=2108.00, stdev= 0.00, samples=1 00:31:45.095 lat (usec) : 250=48.12%, 500=51.86%, 1000=0.03% 00:31:45.095 cpu : usr=1.90%, sys=5.10%, ctx=3907, majf=0, minf=8 00:31:45.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.095 issued rwts: total=1857,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.095 00:31:45.095 Run status group 0 (all jobs): 00:31:45.095 READ: bw=35.2MiB/s (37.0MB/s), 6262KiB/s-11.3MiB/s (6412kB/s-11.8MB/s), io=35.3MiB (37.0MB), run=1001-1001msec 00:31:45.095 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:31:45.095 00:31:45.095 Disk stats (read/write): 00:31:45.095 nvme0n1: ios=2610/2571, merge=0/0, ticks=460/364, in_queue=824, util=89.07% 00:31:45.095 nvme0n2: ios=1585/1545, merge=0/0, ticks=476/333, in_queue=809, util=87.88% 00:31:45.095 nvme0n3: ios=2443/2560, merge=0/0, ticks=430/371, in_queue=801, util=89.40% 00:31:45.095 nvme0n4: ios=1536/1928, merge=0/0, ticks=411/370, in_queue=781, util=89.77% 00:31:45.095 00:57:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:45.095 [global] 00:31:45.095 thread=1 00:31:45.095 invalidate=1 00:31:45.095 rw=write 00:31:45.095 time_based=1 00:31:45.095 runtime=1 00:31:45.095 ioengine=libaio 00:31:45.095 direct=1 00:31:45.095 bs=4096 00:31:45.095 iodepth=128 00:31:45.095 norandommap=0 00:31:45.095 numjobs=1 00:31:45.095 00:31:45.095 verify_dump=1 00:31:45.095 verify_backlog=512 00:31:45.095 verify_state_save=0 00:31:45.095 do_verify=1 00:31:45.095 verify=crc32c-intel 00:31:45.095 [job0] 00:31:45.095 filename=/dev/nvme0n1 00:31:45.095 [job1] 00:31:45.095 filename=/dev/nvme0n2 00:31:45.095 [job2] 00:31:45.095 filename=/dev/nvme0n3 00:31:45.095 [job3] 00:31:45.095 filename=/dev/nvme0n4 00:31:45.095 Could not set queue depth (nvme0n1) 00:31:45.095 Could not set queue depth (nvme0n2) 00:31:45.095 Could not set queue depth (nvme0n3) 00:31:45.095 Could not set queue depth (nvme0n4) 00:31:45.095 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:45.095 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:45.095 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:45.095 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:45.095 fio-3.35 00:31:45.095 Starting 4 threads 00:31:46.472 00:31:46.472 job0: (groupid=0, jobs=1): err= 0: pid=93002: Wed May 15 00:57:49 2024 00:31:46.472 read: 
IOPS=5587, BW=21.8MiB/s (22.9MB/s)(21.8MiB/1001msec) 00:31:46.472 slat (usec): min=7, max=2729, avg=86.85, stdev=404.10 00:31:46.472 clat (usec): min=346, max=13534, avg=11491.45, stdev=1061.28 00:31:46.472 lat (usec): min=2616, max=14579, avg=11578.30, stdev=992.45 00:31:46.472 clat percentiles (usec): 00:31:46.472 | 1.00th=[ 5800], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11338], 00:31:46.472 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:31:46.472 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12387], 00:31:46.472 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13566], 99.95th=[13566], 00:31:46.472 | 99.99th=[13566] 00:31:46.472 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:31:46.472 slat (usec): min=9, max=2758, avg=84.03, stdev=347.93 00:31:46.472 clat (usec): min=8476, max=13603, avg=11062.96, stdev=1104.50 00:31:46.472 lat (usec): min=8992, max=13622, avg=11146.99, stdev=1106.38 00:31:46.472 clat percentiles (usec): 00:31:46.472 | 1.00th=[ 9241], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[ 9896], 00:31:46.472 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10945], 60.00th=[11731], 00:31:46.472 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:31:46.472 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13566], 99.95th=[13566], 00:31:46.472 | 99.99th=[13566] 00:31:46.472 bw ( KiB/s): min=23672, max=23672, per=36.53%, avg=23672.00, stdev= 0.00, samples=1 00:31:46.472 iops : min= 5918, max= 5918, avg=5918.00, stdev= 0.00, samples=1 00:31:46.472 lat (usec) : 500=0.01% 00:31:46.472 lat (msec) : 4=0.29%, 10=15.57%, 20=84.13% 00:31:46.472 cpu : usr=4.00%, sys=15.30%, ctx=544, majf=0, minf=13 00:31:46.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:46.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.472 issued rwts: total=5593,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.472 job1: (groupid=0, jobs=1): err= 0: pid=93003: Wed May 15 00:57:49 2024 00:31:46.472 read: IOPS=2420, BW=9684KiB/s (9916kB/s)(9732KiB/1005msec) 00:31:46.472 slat (usec): min=6, max=14844, avg=176.76, stdev=913.66 00:31:46.472 clat (usec): min=2028, max=35368, avg=23062.10, stdev=3878.96 00:31:46.472 lat (usec): min=6784, max=37704, avg=23238.86, stdev=3783.71 00:31:46.472 clat percentiles (usec): 00:31:46.472 | 1.00th=[ 7373], 5.00th=[18220], 10.00th=[20841], 20.00th=[21365], 00:31:46.472 | 30.00th=[21890], 40.00th=[21890], 50.00th=[22414], 60.00th=[23200], 00:31:46.472 | 70.00th=[23725], 80.00th=[24511], 90.00th=[27395], 95.00th=[31065], 00:31:46.472 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:31:46.472 | 99.99th=[35390] 00:31:46.472 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:31:46.472 slat (usec): min=18, max=9286, avg=214.08, stdev=1105.67 00:31:46.472 clat (usec): min=12519, max=42919, avg=27416.37, stdev=7929.34 00:31:46.472 lat (usec): min=16066, max=42964, avg=27630.46, stdev=7911.01 00:31:46.472 clat percentiles (usec): 00:31:46.472 | 1.00th=[16057], 5.00th=[16909], 10.00th=[17171], 20.00th=[17957], 00:31:46.472 | 30.00th=[20055], 40.00th=[23725], 50.00th=[28443], 60.00th=[31327], 00:31:46.472 | 70.00th=[34866], 80.00th=[35914], 90.00th=[36439], 95.00th=[37487], 00:31:46.472 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 
99.95th=[42730], 00:31:46.472 | 99.99th=[42730] 00:31:46.472 bw ( KiB/s): min= 8968, max=11512, per=15.80%, avg=10240.00, stdev=1798.88, samples=2 00:31:46.472 iops : min= 2242, max= 2878, avg=2560.00, stdev=449.72, samples=2 00:31:46.472 lat (msec) : 4=0.02%, 10=0.64%, 20=18.35%, 50=80.99% 00:31:46.472 cpu : usr=3.19%, sys=7.67%, ctx=183, majf=0, minf=19 00:31:46.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:31:46.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.472 issued rwts: total=2433,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.472 job2: (groupid=0, jobs=1): err= 0: pid=93004: Wed May 15 00:57:49 2024 00:31:46.472 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:31:46.472 slat (usec): min=7, max=8391, avg=159.34, stdev=773.52 00:31:46.472 clat (usec): min=10991, max=36067, avg=19486.75, stdev=3165.71 00:31:46.472 lat (usec): min=11018, max=36085, avg=19646.09, stdev=3235.41 00:31:46.472 clat percentiles (usec): 00:31:46.472 | 1.00th=[12780], 5.00th=[15795], 10.00th=[16581], 20.00th=[17957], 00:31:46.472 | 30.00th=[18220], 40.00th=[18220], 50.00th=[18482], 60.00th=[19006], 00:31:46.472 | 70.00th=[19792], 80.00th=[21365], 90.00th=[23462], 95.00th=[25297], 00:31:46.472 | 99.00th=[31589], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:31:46.472 | 99.99th=[35914] 00:31:46.472 write: IOPS=3025, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1007msec); 0 zone resets 00:31:46.472 slat (usec): min=12, max=7570, avg=185.30, stdev=649.08 00:31:46.472 clat (usec): min=5579, max=37674, avg=25326.50, stdev=4837.10 00:31:46.472 lat (usec): min=8604, max=39466, avg=25511.79, stdev=4859.43 00:31:46.472 clat percentiles (usec): 00:31:46.472 | 1.00th=[11338], 5.00th=[17695], 10.00th=[17957], 20.00th=[21365], 00:31:46.472 | 30.00th=[23987], 40.00th=[25035], 50.00th=[26346], 60.00th=[26870], 00:31:46.472 | 70.00th=[27395], 80.00th=[28967], 90.00th=[31327], 95.00th=[32637], 00:31:46.472 | 99.00th=[34866], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:31:46.472 | 99.99th=[37487] 00:31:46.472 bw ( KiB/s): min=11072, max=12288, per=18.03%, avg=11680.00, stdev=859.84, samples=2 00:31:46.472 iops : min= 2768, max= 3072, avg=2920.00, stdev=214.96, samples=2 00:31:46.472 lat (msec) : 10=0.30%, 20=43.64%, 50=56.05% 00:31:46.472 cpu : usr=2.78%, sys=10.83%, ctx=420, majf=0, minf=13 00:31:46.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:46.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.472 issued rwts: total=2560,3047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.472 job3: (groupid=0, jobs=1): err= 0: pid=93005: Wed May 15 00:57:49 2024 00:31:46.472 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:31:46.472 slat (usec): min=8, max=3269, avg=101.35, stdev=475.29 00:31:46.472 clat (usec): min=10098, max=16076, avg=13359.60, stdev=764.12 00:31:46.472 lat (usec): min=10883, max=18492, avg=13460.95, stdev=633.75 00:31:46.472 clat percentiles (usec): 00:31:46.472 | 1.00th=[10552], 5.00th=[11469], 10.00th=[12780], 20.00th=[13042], 00:31:46.472 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:31:46.472 | 
70.00th=[13698], 80.00th=[13829], 90.00th=[14091], 95.00th=[14353], 00:31:46.472 | 99.00th=[14877], 99.50th=[15008], 99.90th=[15533], 99.95th=[16057], 00:31:46.472 | 99.99th=[16057] 00:31:46.472 write: IOPS=5063, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1002msec); 0 zone resets 00:31:46.472 slat (usec): min=12, max=3108, avg=97.44, stdev=411.25 00:31:46.472 clat (usec): min=1561, max=15790, avg=12809.41, stdev=1658.12 00:31:46.473 lat (usec): min=1613, max=15812, avg=12906.85, stdev=1658.14 00:31:46.473 clat percentiles (usec): 00:31:46.473 | 1.00th=[ 5669], 5.00th=[11076], 10.00th=[11338], 20.00th=[11600], 00:31:46.473 | 30.00th=[11731], 40.00th=[11994], 50.00th=[13173], 60.00th=[13698], 00:31:46.473 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14484], 95.00th=[14615], 00:31:46.473 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15795], 99.95th=[15795], 00:31:46.473 | 99.99th=[15795] 00:31:46.473 bw ( KiB/s): min=19096, max=20480, per=30.54%, avg=19788.00, stdev=978.64, samples=2 00:31:46.473 iops : min= 4774, max= 5120, avg=4947.00, stdev=244.66, samples=2 00:31:46.473 lat (msec) : 2=0.17%, 4=0.08%, 10=0.76%, 20=98.99% 00:31:46.473 cpu : usr=5.09%, sys=13.29%, ctx=494, majf=0, minf=5 00:31:46.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:46.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.473 issued rwts: total=4608,5074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.473 00:31:46.473 Run status group 0 (all jobs): 00:31:46.473 READ: bw=58.9MiB/s (61.8MB/s), 9684KiB/s-21.8MiB/s (9916kB/s-22.9MB/s), io=59.4MiB (62.2MB), run=1001-1007msec 00:31:46.473 WRITE: bw=63.3MiB/s (66.4MB/s), 9.95MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=63.7MiB (66.8MB), run=1001-1007msec 00:31:46.473 00:31:46.473 Disk stats (read/write): 00:31:46.473 nvme0n1: ios=4673/5120, merge=0/0, ticks=12199/12209, in_queue=24408, util=89.07% 00:31:46.473 nvme0n2: ios=2097/2176, merge=0/0, ticks=11081/14223, in_queue=25304, util=89.60% 00:31:46.473 nvme0n3: ios=2326/2560, merge=0/0, ticks=21859/30050, in_queue=51909, util=89.86% 00:31:46.473 nvme0n4: ios=4102/4289, merge=0/0, ticks=12384/12028, in_queue=24412, util=90.11% 00:31:46.473 00:57:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:46.473 [global] 00:31:46.473 thread=1 00:31:46.473 invalidate=1 00:31:46.473 rw=randwrite 00:31:46.473 time_based=1 00:31:46.473 runtime=1 00:31:46.473 ioengine=libaio 00:31:46.473 direct=1 00:31:46.473 bs=4096 00:31:46.473 iodepth=128 00:31:46.473 norandommap=0 00:31:46.473 numjobs=1 00:31:46.473 00:31:46.473 verify_dump=1 00:31:46.473 verify_backlog=512 00:31:46.473 verify_state_save=0 00:31:46.473 do_verify=1 00:31:46.473 verify=crc32c-intel 00:31:46.473 [job0] 00:31:46.473 filename=/dev/nvme0n1 00:31:46.473 [job1] 00:31:46.473 filename=/dev/nvme0n2 00:31:46.473 [job2] 00:31:46.473 filename=/dev/nvme0n3 00:31:46.473 [job3] 00:31:46.473 filename=/dev/nvme0n4 00:31:46.473 Could not set queue depth (nvme0n1) 00:31:46.473 Could not set queue depth (nvme0n2) 00:31:46.473 Could not set queue depth (nvme0n3) 00:31:46.473 Could not set queue depth (nvme0n4) 00:31:46.473 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:46.473 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:46.473 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:46.473 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:46.473 fio-3.35 00:31:46.473 Starting 4 threads 00:31:47.848 00:31:47.848 job0: (groupid=0, jobs=1): err= 0: pid=93059: Wed May 15 00:57:50 2024 00:31:47.848 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:31:47.848 slat (usec): min=8, max=3480, avg=85.17, stdev=442.76 00:31:47.848 clat (usec): min=8560, max=15265, avg=11420.20, stdev=806.48 00:31:47.848 lat (usec): min=8584, max=15298, avg=11505.36, stdev=861.76 00:31:47.848 clat percentiles (usec): 00:31:47.848 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11076], 00:31:47.848 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:31:47.848 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12125], 95.00th=[12518], 00:31:47.848 | 99.00th=[14091], 99.50th=[14484], 99.90th=[14746], 99.95th=[15008], 00:31:47.848 | 99.99th=[15270] 00:31:47.848 write: IOPS=5762, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1003msec); 0 zone resets 00:31:47.848 slat (usec): min=11, max=3230, avg=82.70, stdev=389.90 00:31:47.848 clat (usec): min=329, max=14349, avg=10800.24, stdev=1340.55 00:31:47.848 lat (usec): min=2734, max=14886, avg=10882.95, stdev=1321.95 00:31:47.848 clat percentiles (usec): 00:31:47.848 | 1.00th=[ 6456], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9503], 00:31:47.848 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:31:47.848 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12125], 00:31:47.848 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13829], 99.95th=[14091], 00:31:47.848 | 99.99th=[14353] 00:31:47.848 bw ( KiB/s): min=21520, max=23704, per=34.95%, avg=22612.00, stdev=1544.32, samples=2 00:31:47.848 iops : min= 5380, max= 5926, avg=5653.00, stdev=386.08, samples=2 00:31:47.848 lat (usec) : 500=0.01% 00:31:47.848 lat (msec) : 4=0.37%, 10=13.37%, 20=86.25% 00:31:47.848 cpu : usr=5.49%, sys=14.77%, ctx=403, majf=0, minf=9 00:31:47.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:47.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:47.848 issued rwts: total=5632,5780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:47.848 job1: (groupid=0, jobs=1): err= 0: pid=93060: Wed May 15 00:57:50 2024 00:31:47.848 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:31:47.848 slat (usec): min=2, max=14583, avg=193.59, stdev=942.06 00:31:47.848 clat (usec): min=15005, max=36244, avg=23788.84, stdev=3353.30 00:31:47.848 lat (usec): min=15019, max=36281, avg=23982.43, stdev=3432.41 00:31:47.848 clat percentiles (usec): 00:31:47.848 | 1.00th=[15795], 5.00th=[17957], 10.00th=[19530], 20.00th=[21890], 00:31:47.848 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:31:47.848 | 70.00th=[24249], 80.00th=[25297], 90.00th=[27657], 95.00th=[30016], 00:31:47.848 | 99.00th=[33817], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:31:47.848 | 99.99th=[36439] 00:31:47.848 write: IOPS=2702, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1011msec); 0 zone resets 00:31:47.848 slat (usec): min=4, max=11705, avg=176.94, stdev=797.07 00:31:47.848 clat 
(usec): min=10439, max=39908, avg=24338.98, stdev=3375.57 00:31:47.848 lat (usec): min=10447, max=40706, avg=24515.92, stdev=3462.01 00:31:47.848 clat percentiles (usec): 00:31:47.848 | 1.00th=[12649], 5.00th=[19006], 10.00th=[21365], 20.00th=[22938], 00:31:47.848 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24511], 60.00th=[24773], 00:31:47.848 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26346], 95.00th=[30802], 00:31:47.848 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[37487], 00:31:47.848 | 99.99th=[40109] 00:31:47.848 bw ( KiB/s): min= 9064, max=11776, per=16.11%, avg=10420.00, stdev=1917.67, samples=2 00:31:47.848 iops : min= 2266, max= 2944, avg=2605.00, stdev=479.42, samples=2 00:31:47.848 lat (msec) : 20=8.71%, 50=91.29% 00:31:47.848 cpu : usr=2.97%, sys=6.83%, ctx=865, majf=0, minf=15 00:31:47.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:47.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:47.848 issued rwts: total=2560,2732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:47.848 job2: (groupid=0, jobs=1): err= 0: pid=93061: Wed May 15 00:57:50 2024 00:31:47.848 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:31:47.848 slat (usec): min=3, max=11161, avg=194.83, stdev=928.18 00:31:47.848 clat (usec): min=15725, max=34317, avg=23723.84, stdev=2906.39 00:31:47.848 lat (usec): min=15915, max=34759, avg=23918.67, stdev=3009.54 00:31:47.848 clat percentiles (usec): 00:31:47.848 | 1.00th=[16712], 5.00th=[17957], 10.00th=[19792], 20.00th=[22414], 00:31:47.848 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:31:47.848 | 70.00th=[24249], 80.00th=[24773], 90.00th=[28181], 95.00th=[28967], 00:31:47.848 | 99.00th=[32113], 99.50th=[33162], 99.90th=[34341], 99.95th=[34341], 00:31:47.848 | 99.99th=[34341] 00:31:47.848 write: IOPS=2703, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1012msec); 0 zone resets 00:31:47.848 slat (usec): min=4, max=10003, avg=175.96, stdev=753.01 00:31:47.848 clat (usec): min=9506, max=36712, avg=24379.82, stdev=3099.53 00:31:47.848 lat (usec): min=13511, max=36725, avg=24555.78, stdev=3171.79 00:31:47.848 clat percentiles (usec): 00:31:47.848 | 1.00th=[15139], 5.00th=[19006], 10.00th=[21103], 20.00th=[22414], 00:31:47.848 | 30.00th=[23462], 40.00th=[24249], 50.00th=[24511], 60.00th=[25035], 00:31:47.848 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26870], 95.00th=[30278], 00:31:47.848 | 99.00th=[33817], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:31:47.848 | 99.99th=[36963] 00:31:47.848 bw ( KiB/s): min= 9064, max=11800, per=16.12%, avg=10432.00, stdev=1934.64, samples=2 00:31:47.848 iops : min= 2266, max= 2950, avg=2608.00, stdev=483.66, samples=2 00:31:47.848 lat (msec) : 10=0.02%, 20=8.55%, 50=91.43% 00:31:47.848 cpu : usr=2.37%, sys=7.81%, ctx=869, majf=0, minf=13 00:31:47.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:47.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:47.849 issued rwts: total=2560,2736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:47.849 job3: (groupid=0, jobs=1): err= 0: pid=93062: Wed May 15 00:57:50 2024 00:31:47.849 read: IOPS=4872, BW=19.0MiB/s 
(20.0MB/s)(19.1MiB/1003msec) 00:31:47.849 slat (usec): min=7, max=6175, avg=98.80, stdev=458.28 00:31:47.849 clat (usec): min=1073, max=20109, avg=12788.62, stdev=1710.10 00:31:47.849 lat (usec): min=3234, max=20130, avg=12887.41, stdev=1744.13 00:31:47.849 clat percentiles (usec): 00:31:47.849 | 1.00th=[ 6390], 5.00th=[10552], 10.00th=[11338], 20.00th=[11994], 00:31:47.849 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:31:47.849 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14746], 95.00th=[15533], 00:31:47.849 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18744], 99.95th=[19006], 00:31:47.849 | 99.99th=[20055] 00:31:47.849 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:31:47.849 slat (usec): min=11, max=6682, avg=93.27, stdev=454.33 00:31:47.849 clat (usec): min=7317, max=19098, avg=12566.48, stdev=1449.81 00:31:47.849 lat (usec): min=7361, max=19617, avg=12659.74, stdev=1487.06 00:31:47.849 clat percentiles (usec): 00:31:47.849 | 1.00th=[ 8225], 5.00th=[10421], 10.00th=[11076], 20.00th=[11731], 00:31:47.849 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:31:47.849 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[15139], 00:31:47.849 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:31:47.849 | 99.99th=[19006] 00:31:47.849 bw ( KiB/s): min=20480, max=20480, per=31.66%, avg=20480.00, stdev= 0.00, samples=2 00:31:47.849 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:31:47.849 lat (msec) : 2=0.01%, 4=0.28%, 10=3.05%, 20=96.65%, 50=0.01% 00:31:47.849 cpu : usr=4.69%, sys=15.27%, ctx=533, majf=0, minf=15 00:31:47.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:47.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:47.849 issued rwts: total=4887,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:47.849 00:31:47.849 Run status group 0 (all jobs): 00:31:47.849 READ: bw=60.4MiB/s (63.3MB/s), 9.88MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=61.1MiB (64.1MB), run=1003-1012msec 00:31:47.849 WRITE: bw=63.2MiB/s (66.2MB/s), 10.6MiB/s-22.5MiB/s (11.1MB/s-23.6MB/s), io=63.9MiB (67.0MB), run=1003-1012msec 00:31:47.849 00:31:47.849 Disk stats (read/write): 00:31:47.849 nvme0n1: ios=4704/5120, merge=0/0, ticks=15731/15263, in_queue=30994, util=88.34% 00:31:47.849 nvme0n2: ios=2073/2417, merge=0/0, ticks=23618/26922, in_queue=50540, util=87.70% 00:31:47.849 nvme0n3: ios=2048/2446, merge=0/0, ticks=23242/27379, in_queue=50621, util=87.99% 00:31:47.849 nvme0n4: ios=4096/4396, merge=0/0, ticks=25167/24017, in_queue=49184, util=89.69% 00:31:47.849 00:57:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:47.849 00:57:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=93075 00:31:47.849 00:57:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:47.849 00:57:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:47.849 [global] 00:31:47.849 thread=1 00:31:47.849 invalidate=1 00:31:47.849 rw=read 00:31:47.849 time_based=1 00:31:47.849 runtime=10 00:31:47.849 ioengine=libaio 00:31:47.849 direct=1 00:31:47.849 bs=4096 00:31:47.849 iodepth=1 00:31:47.849 norandommap=1 00:31:47.849 numjobs=1 00:31:47.849 00:31:47.849 [job0] 00:31:47.849 
filename=/dev/nvme0n1 00:31:47.849 [job1] 00:31:47.849 filename=/dev/nvme0n2 00:31:47.849 [job2] 00:31:47.849 filename=/dev/nvme0n3 00:31:47.849 [job3] 00:31:47.849 filename=/dev/nvme0n4 00:31:47.849 Could not set queue depth (nvme0n1) 00:31:47.849 Could not set queue depth (nvme0n2) 00:31:47.849 Could not set queue depth (nvme0n3) 00:31:47.849 Could not set queue depth (nvme0n4) 00:31:47.849 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.849 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.849 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.849 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.849 fio-3.35 00:31:47.849 Starting 4 threads 00:31:51.211 00:57:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:51.211 fio: pid=93124, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:31:51.211 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=64086016, buflen=4096 00:31:51.211 00:57:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:51.211 fio: pid=93123, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:31:51.211 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=67641344, buflen=4096 00:31:51.211 00:57:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:51.211 00:57:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:51.486 fio: pid=93121, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:31:51.486 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=58494976, buflen=4096 00:31:51.486 00:57:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:51.486 00:57:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:51.745 fio: pid=93122, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:31:51.745 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=66080768, buflen=4096 00:31:51.745 00:31:51.745 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93121: Wed May 15 00:57:54 2024 00:31:51.745 read: IOPS=4120, BW=16.1MiB/s (16.9MB/s)(55.8MiB/3466msec) 00:31:51.745 slat (usec): min=11, max=12825, avg=19.36, stdev=167.91 00:31:51.745 clat (usec): min=3, max=7947, avg=221.70, stdev=128.90 00:31:51.745 lat (usec): min=146, max=13043, avg=241.06, stdev=211.65 00:31:51.745 clat percentiles (usec): 00:31:51.745 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:31:51.745 | 30.00th=[ 163], 40.00th=[ 204], 50.00th=[ 251], 60.00th=[ 255], 00:31:51.745 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 269], 95.00th=[ 277], 00:31:51.745 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 529], 99.95th=[ 979], 00:31:51.745 | 99.99th=[ 7635] 00:31:51.745 bw ( KiB/s): min=13880, max=22408, per=24.03%, avg=15825.33, stdev=3264.75, samples=6 00:31:51.746 iops : min= 3470, max= 5602, avg=3956.33, stdev=816.19, samples=6 
00:31:51.746 lat (usec) : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.02%, 250=48.66% 00:31:51.746 lat (usec) : 500=51.18%, 750=0.05%, 1000=0.01% 00:31:51.746 lat (msec) : 2=0.01%, 4=0.02%, 10=0.02% 00:31:51.746 cpu : usr=1.13%, sys=5.95%, ctx=14331, majf=0, minf=1 00:31:51.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:51.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.746 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.746 issued rwts: total=14282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:51.746 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93122: Wed May 15 00:57:54 2024 00:31:51.746 read: IOPS=4244, BW=16.6MiB/s (17.4MB/s)(63.0MiB/3801msec) 00:31:51.746 slat (usec): min=8, max=13925, avg=19.19, stdev=202.02 00:31:51.746 clat (usec): min=125, max=8252, avg=214.86, stdev=92.10 00:31:51.746 lat (usec): min=140, max=14213, avg=234.05, stdev=222.85 00:31:51.746 clat percentiles (usec): 00:31:51.746 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:31:51.746 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 247], 60.00th=[ 255], 00:31:51.746 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 277], 00:31:51.746 | 99.00th=[ 289], 99.50th=[ 334], 99.90th=[ 652], 99.95th=[ 1074], 00:31:51.746 | 99.99th=[ 2769] 00:31:51.746 bw ( KiB/s): min=14216, max=22816, per=25.30%, avg=16660.71, stdev=3332.15, samples=7 00:31:51.746 iops : min= 3554, max= 5704, avg=4165.14, stdev=833.00, samples=7 00:31:51.746 lat (usec) : 250=53.45%, 500=46.29%, 750=0.17%, 1000=0.02% 00:31:51.746 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:31:51.746 cpu : usr=1.26%, sys=5.76%, ctx=16160, majf=0, minf=1 00:31:51.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:51.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.746 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.746 issued rwts: total=16134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:51.746 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93123: Wed May 15 00:57:54 2024 00:31:51.746 read: IOPS=5128, BW=20.0MiB/s (21.0MB/s)(64.5MiB/3220msec) 00:31:51.746 slat (usec): min=13, max=12806, avg=16.53, stdev=114.33 00:31:51.746 clat (usec): min=149, max=2064, avg=177.16, stdev=31.25 00:31:51.746 lat (usec): min=166, max=13007, avg=193.70, stdev=118.79 00:31:51.746 clat percentiles (usec): 00:31:51.746 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:31:51.746 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:31:51.746 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 194], 00:31:51.746 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 302], 99.95th=[ 570], 00:31:51.746 | 99.99th=[ 1991] 00:31:51.746 bw ( KiB/s): min=20456, max=20888, per=31.43%, avg=20697.33, stdev=168.83, samples=6 00:31:51.746 iops : min= 5114, max= 5222, avg=5174.33, stdev=42.21, samples=6 00:31:51.746 lat (usec) : 250=99.84%, 500=0.09%, 750=0.02%, 1000=0.01% 00:31:51.746 lat (msec) : 2=0.02%, 4=0.01% 00:31:51.746 cpu : usr=1.49%, sys=6.37%, ctx=16519, majf=0, minf=1 00:31:51.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:51.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.746 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.746 issued rwts: total=16515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:51.746 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93124: Wed May 15 00:57:54 2024 00:31:51.746 read: IOPS=5257, BW=20.5MiB/s (21.5MB/s)(61.1MiB/2976msec) 00:31:51.746 slat (usec): min=13, max=135, avg=16.23, stdev= 2.32 00:31:51.746 clat (usec): min=140, max=2089, avg=172.42, stdev=22.68 00:31:51.746 lat (usec): min=158, max=2117, avg=188.65, stdev=22.92 00:31:51.746 clat percentiles (usec): 00:31:51.746 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:31:51.746 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:31:51.746 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 184], 95.00th=[ 188], 00:31:51.746 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 383], 99.95th=[ 519], 00:31:51.746 | 99.99th=[ 889] 00:31:51.746 bw ( KiB/s): min=20744, max=21408, per=32.06%, avg=21113.60, stdev=244.45, samples=5 00:31:51.746 iops : min= 5186, max= 5352, avg=5278.40, stdev=61.11, samples=5 00:31:51.746 lat (usec) : 250=99.58%, 500=0.35%, 750=0.03%, 1000=0.02% 00:31:51.746 lat (msec) : 4=0.01% 00:31:51.746 cpu : usr=1.68%, sys=6.69%, ctx=15667, majf=0, minf=1 00:31:51.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:51.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.746 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.746 issued rwts: total=15647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:51.746 00:31:51.746 Run status group 0 (all jobs): 00:31:51.746 READ: bw=64.3MiB/s (67.4MB/s), 16.1MiB/s-20.5MiB/s (16.9MB/s-21.5MB/s), io=244MiB (256MB), run=2976-3801msec 00:31:51.746 00:31:51.746 Disk stats (read/write): 00:31:51.746 nvme0n1: ios=13711/0, merge=0/0, ticks=3086/0, in_queue=3086, util=94.99% 00:31:51.746 nvme0n2: ios=15126/0, merge=0/0, ticks=3308/0, in_queue=3308, util=95.37% 00:31:51.746 nvme0n3: ios=15993/0, merge=0/0, ticks=2879/0, in_queue=2879, util=96.27% 00:31:51.746 nvme0n4: ios=15102/0, merge=0/0, ticks=2695/0, in_queue=2695, util=96.79% 00:31:51.746 00:57:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:51.746 00:57:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:52.005 00:57:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:52.005 00:57:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:52.263 00:57:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:52.263 00:57:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:52.522 00:57:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:52.522 00:57:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 
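For context, the hotplug check running here boils down to three steps: start the fio-wrapper reads in the background, pull the backing malloc/raid bdevs out from under the exported namespaces over RPC, and then expect fio to exit with an error. A minimal sketch of that flow, assuming the same fio-wrapper and rpc.py invocations traced in this run (the bdev names in the loop are illustrative; the script iterates its own $malloc_bdevs/$raid_malloc_bdevs lists):

# start ~10s of reads against the NVMe-oF namespaces in the background
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# hot-remove the backing bdevs while the reads are in flight
for bdev in Malloc0 Malloc1 Malloc2 Malloc3; do   # illustrative names
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$bdev"
done
# fio is expected to fail once its namespaces disappear
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'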
00:31:52.780 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:52.780 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 93075 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:53.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:53.039 nvmf hotplug test: fio failed as expected 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:53.039 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:53.299 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:53.558 rmmod nvme_tcp 00:31:53.558 rmmod nvme_fabrics 00:31:53.558 rmmod nvme_keyring 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 92591 ']' 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 92591 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@947 -- # '[' -z 92591 ']' 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 92591 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 92591 00:31:53.558 killing process with pid 92591 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 92591' 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 92591 00:31:53.558 [2024-05-15 00:57:56.667116] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:53.558 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 92591 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:53.818 00:31:53.818 real 0m19.465s 00:31:53.818 user 1m14.110s 00:31:53.818 sys 0m9.438s 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:53.818 00:57:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:53.818 ************************************ 00:31:53.818 END TEST nvmf_fio_target 00:31:53.818 ************************************ 00:31:53.818 00:57:56 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:31:53.818 00:57:56 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:53.818 00:57:56 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:53.818 00:57:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:53.818 ************************************ 00:31:53.818 START TEST nvmf_bdevio 00:31:53.818 ************************************ 00:31:53.818 00:57:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:31:53.818 * Looking for test storage... 
00:31:53.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.818 00:57:57 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:53.818 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:53.819 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:54.078 Cannot find device "nvmf_tgt_br" 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:54.078 Cannot find device "nvmf_tgt_br2" 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:54.078 Cannot find device "nvmf_tgt_br" 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:54.078 Cannot find device "nvmf_tgt_br2" 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:54.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:54.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:54.078 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:54.079 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:54.079 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:54.079 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:54.079 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:54.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:31:54.338 00:31:54.338 --- 10.0.0.2 ping statistics --- 00:31:54.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.338 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:54.338 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:54.338 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:31:54.338 00:31:54.338 --- 10.0.0.3 ping statistics --- 00:31:54.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.338 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:54.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:31:54.338 00:31:54.338 --- 10.0.0.1 ping statistics --- 00:31:54.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.338 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=93441 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 93441 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 93441 ']' 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:54.338 00:57:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.338 [2024-05-15 00:57:57.491069] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:31:54.338 [2024-05-15 00:57:57.491201] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.597 [2024-05-15 00:57:57.632909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:54.597 [2024-05-15 00:57:57.742261] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.597 [2024-05-15 00:57:57.742338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:54.597 [2024-05-15 00:57:57.742364] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.597 [2024-05-15 00:57:57.742374] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.597 [2024-05-15 00:57:57.742384] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.597 [2024-05-15 00:57:57.742549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:54.597 [2024-05-15 00:57:57.742695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:54.597 [2024-05-15 00:57:57.743195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:54.597 [2024-05-15 00:57:57.743203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:55.535 [2024-05-15 00:57:58.624299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:55.535 Malloc0 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
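Stripped of the xtrace noise, the target-side setup that bdevio.sh has just driven through rpc_cmd is a short, fixed RPC sequence: create the TCP transport, create a malloc bdev, expose it through a subsystem as a namespace, and add a listener on the veth address. A condensed equivalent, where rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and every argument is the one traced above:

rpc.py nvmf_create_transport -t tcp -o -u 8192                              # same transport options as the rpc_cmd call above
rpc.py bdev_malloc_create 64 512 -b Malloc0                                 # 64 MiB backing bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420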
00:31:55.535 [2024-05-15 00:57:58.697781] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:55.535 [2024-05-15 00:57:58.698232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:55.535 { 00:31:55.535 "params": { 00:31:55.535 "name": "Nvme$subsystem", 00:31:55.535 "trtype": "$TEST_TRANSPORT", 00:31:55.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:55.535 "adrfam": "ipv4", 00:31:55.535 "trsvcid": "$NVMF_PORT", 00:31:55.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:55.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:55.535 "hdgst": ${hdgst:-false}, 00:31:55.535 "ddgst": ${ddgst:-false} 00:31:55.535 }, 00:31:55.535 "method": "bdev_nvme_attach_controller" 00:31:55.535 } 00:31:55.535 EOF 00:31:55.535 )") 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:31:55.535 00:57:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:55.535 "params": { 00:31:55.535 "name": "Nvme1", 00:31:55.535 "trtype": "tcp", 00:31:55.535 "traddr": "10.0.0.2", 00:31:55.535 "adrfam": "ipv4", 00:31:55.535 "trsvcid": "4420", 00:31:55.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:55.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:55.535 "hdgst": false, 00:31:55.535 "ddgst": false 00:31:55.535 }, 00:31:55.535 "method": "bdev_nvme_attach_controller" 00:31:55.535 }' 00:31:55.535 [2024-05-15 00:57:58.751335] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:31:55.535 [2024-05-15 00:57:58.751440] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93495 ] 00:31:55.794 [2024-05-15 00:57:58.895642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:55.794 [2024-05-15 00:57:59.017653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.794 [2024-05-15 00:57:59.017833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.794 [2024-05-15 00:57:59.017848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.052 I/O targets: 00:31:56.052 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:56.052 00:31:56.052 00:31:56.052 CUnit - A unit testing framework for C - Version 2.1-3 00:31:56.052 http://cunit.sourceforge.net/ 00:31:56.052 00:31:56.052 00:31:56.052 Suite: bdevio tests on: Nvme1n1 00:31:56.052 Test: blockdev write read block ...passed 00:31:56.053 Test: blockdev write zeroes read block ...passed 00:31:56.053 Test: blockdev write zeroes read no split ...passed 00:31:56.053 Test: blockdev write zeroes read split ...passed 00:31:56.053 Test: blockdev write zeroes read split partial ...passed 00:31:56.053 Test: blockdev reset ...[2024-05-15 00:57:59.319418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:56.053 [2024-05-15 00:57:59.319555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2372860 (9): Bad file descriptor 00:31:56.053 [2024-05-15 00:57:59.334702] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:56.053 passed 00:31:56.053 Test: blockdev write read 8 blocks ...passed 00:31:56.053 Test: blockdev write read size > 128k ...passed 00:31:56.053 Test: blockdev write read invalid size ...passed 00:31:56.311 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:56.311 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:56.311 Test: blockdev write read max offset ...passed 00:31:56.311 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:56.311 Test: blockdev writev readv 8 blocks ...passed 00:31:56.311 Test: blockdev writev readv 30 x 1block ...passed 00:31:56.311 Test: blockdev writev readv block ...passed 00:31:56.311 Test: blockdev writev readv size > 128k ...passed 00:31:56.311 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:56.311 Test: blockdev comparev and writev ...[2024-05-15 00:57:59.507240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.311 [2024-05-15 00:57:59.507303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.311 [2024-05-15 00:57:59.507324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.311 [2024-05-15 00:57:59.507335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.311 [2024-05-15 00:57:59.507732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.311 [2024-05-15 00:57:59.507759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.311 [2024-05-15 00:57:59.507776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.311 [2024-05-15 00:57:59.507786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.311 [2024-05-15 00:57:59.508090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.311 [2024-05-15 00:57:59.508115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.311 [2024-05-15 00:57:59.508133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.311 [2024-05-15 00:57:59.508143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.311 [2024-05-15 00:57:59.508540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.311 [2024-05-15 00:57:59.508567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.311 [2024-05-15 00:57:59.508584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.311 [2024-05-15 00:57:59.508606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.311 passed 00:31:56.311 Test: blockdev nvme passthru rw ...passed 00:31:56.311 Test: blockdev nvme passthru vendor specific ...[2024-05-15 00:57:59.591078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:56.311 [2024-05-15 00:57:59.591136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.311 [2024-05-15 00:57:59.591266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:56.311 [2024-05-15 00:57:59.591283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.311 [2024-05-15 00:57:59.591391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:56.311 [2024-05-15 00:57:59.591419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.311 [2024-05-15 00:57:59.591535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:56.311 [2024-05-15 00:57:59.591559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.311 passed 00:31:56.570 Test: blockdev nvme admin passthru ...passed 00:31:56.570 Test: blockdev copy ...passed 00:31:56.570 00:31:56.570 Run Summary: Type Total Ran Passed Failed Inactive 00:31:56.570 suites 1 1 n/a 0 0 00:31:56.570 tests 23 23 23 0 0 00:31:56.570 asserts 
152 152 152 0 n/a 00:31:56.570 00:31:56.570 Elapsed time = 0.908 seconds 00:31:56.570 00:57:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:56.570 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.570 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.570 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.570 00:57:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:56.570 00:57:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:56.570 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:56.570 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:56.830 rmmod nvme_tcp 00:31:56.830 rmmod nvme_fabrics 00:31:56.830 rmmod nvme_keyring 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 93441 ']' 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 93441 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 93441 ']' 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 93441 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93441 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:31:56.830 killing process with pid 93441 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93441' 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 93441 00:31:56.830 00:57:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 93441 00:31:56.830 [2024-05-15 00:57:59.959719] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
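The nvmftestfini path that follows is the mirror image of the setup: unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt process started by nvmfappstart (pid 93441 in this run), and drop the namespace/veth plumbing. A rough hand-rolled sketch, assuming the interface and namespace names used throughout this run; the netns deletion is an assumption standing in for the _remove_spdk_ns helper:

modprobe -v -r nvme-tcp              # the rmmod lines above show nvme_tcp/nvme_fabrics/nvme_keyring being unloaded
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # nvmf_tgt process from nvmfappstart
ip netns delete nvmf_tgt_ns_spdk     # assumption: removes nvmf_tgt_if/nvmf_tgt_if2 along with the namespace
ip link delete nvmf_br type bridge 2>/dev/null || true
ip -4 addr flush nvmf_init_if        # same flush the log performs just after this point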
00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:57.090 00:31:57.090 real 0m3.263s 00:31:57.090 user 0m11.906s 00:31:57.090 sys 0m0.820s 00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:57.090 00:58:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.090 ************************************ 00:31:57.090 END TEST nvmf_bdevio 00:31:57.090 ************************************ 00:31:57.090 00:58:00 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:31:57.090 00:58:00 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:57.090 00:58:00 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:57.090 00:58:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:57.090 ************************************ 00:31:57.090 START TEST nvmf_auth_target 00:31:57.090 ************************************ 00:31:57.090 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:31:57.090 * Looking for test storage... 00:31:57.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:57.349 00:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:57.349 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:31:57.349 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.349 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.349 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.349 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.349 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.349 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:57.350 Cannot find device "nvmf_tgt_br" 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:57.350 Cannot find device "nvmf_tgt_br2" 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:57.350 Cannot find device "nvmf_tgt_br" 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:57.350 Cannot find device "nvmf_tgt_br2" 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:57.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:57.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set 
nvmf_tgt_br up 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:57.350 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:57.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:31:57.610 00:31:57.610 --- 10.0.0.2 ping statistics --- 00:31:57.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.610 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:57.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:57.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:31:57.610 00:31:57.610 --- 10.0.0.3 ping statistics --- 00:31:57.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.610 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:57.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:57.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:31:57.610 00:31:57.610 --- 10.0.0.1 ping statistics --- 00:31:57.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.610 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=93677 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 93677 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 93677 ']' 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
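The nvmf_veth_init and nvmfappstart steps above reduce to a small, reproducible topology: an initiator-side veth whose peer sits on a bridge together with the peers of the target-side veths living inside the nvmf_tgt_ns_spdk namespace, plus an iptables rule admitting TCP/4420 and a ping sanity check before the target starts. Below is a condensed sketch using the same device names, addresses, and binary path as the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity, and the command chaining is editorial rather than taken from nvmf/common.sh.

    # Namespace and veth pairs (host-side peers carry the *_br names).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 = initiator side, 10.0.0.2 = target listener.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so the two ends can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Admit NVMe/TCP (port 4420) and allow forwarding across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks, then start the target inside the namespace with auth tracing.
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth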
00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:57.610 00:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:58.546 00:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:58.546 00:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:31:58.546 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:58.546 00:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:58.546 00:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:58.804 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:58.804 00:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=93721 00:31:58.804 00:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:31:58.804 00:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:31:58.804 00:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:31:58.804 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:31:58.804 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:58.804 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:58.804 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=27483dd2e48e67de7977ef3fe180f459ccf2b8d412e9a594 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xdg 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 27483dd2e48e67de7977ef3fe180f459ccf2b8d412e9a594 0 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 27483dd2e48e67de7977ef3fe180f459ccf2b8d412e9a594 0 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=27483dd2e48e67de7977ef3fe180f459ccf2b8d412e9a594 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xdg 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xdg 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.xdg 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:31:58.805 
00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=94c805ad45bbe167c05be6b8e964a72d 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SW4 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 94c805ad45bbe167c05be6b8e964a72d 1 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 94c805ad45bbe167c05be6b8e964a72d 1 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=94c805ad45bbe167c05be6b8e964a72d 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:58.805 00:58:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SW4 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SW4 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.SW4 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ea9d0a5d3af254e024cccb0678c5ec25ee205d3accd86065 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.AZB 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ea9d0a5d3af254e024cccb0678c5ec25ee205d3accd86065 2 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ea9d0a5d3af254e024cccb0678c5ec25ee205d3accd86065 2 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ea9d0a5d3af254e024cccb0678c5ec25ee205d3accd86065 00:31:58.805 00:58:02 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.AZB 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.AZB 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.AZB 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0e3c039cb82660a8c22feb35a0dbb9d3904d4baca9130c0be4c653ed4451f509 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Gv5 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0e3c039cb82660a8c22feb35a0dbb9d3904d4baca9130c0be4c653ed4451f509 3 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0e3c039cb82660a8c22feb35a0dbb9d3904d4baca9130c0be4c653ed4451f509 3 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0e3c039cb82660a8c22feb35a0dbb9d3904d4baca9130c0be4c653ed4451f509 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:31:58.805 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:31:59.064 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Gv5 00:31:59.064 00:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Gv5 00:31:59.064 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.Gv5 00:31:59.064 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 93677 00:31:59.064 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 93677 ']' 00:31:59.064 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.064 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:59.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.064 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
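Each gen_dhchap_key call above turns /dev/urandom output into a DHHC-1 secret file that the test later registers with keyring_file_add_key on both the target and the host side. The following is a minimal sketch of that derivation for the "null"-digest key, assuming (as the decoded secrets appearing later in this log suggest) that the base64 payload is the ASCII hex key followed by its CRC-32 in little-endian order, and that the key file simply holds the full DHHC-1 string; the python body is illustrative, not the exact nvmf/common.sh helper.

    # 48 hex characters for the null-digest variant, as in the trace above.
    key=$(xxd -p -c0 -l 24 /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)

    # Assumed layout: DHHC-1:<digest id>:<base64(key bytes + CRC-32(key), little-endian)>:
    # digest ids 0=null, 1=sha256, 2=sha384, 3=sha512, matching the digests map above.
    python3 - "$key" > "$file" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # the hex string is used as raw ASCII bytes
    crc = zlib.crc32(key).to_bytes(4, 'little')   # integrity trailer appended before encoding
    print('DHHC-1:00:' + base64.b64encode(key + crc).decode() + ':')
    EOF
    chmod 0600 "$file"

    # The file is then handed to the keyring on both sides, as the trace below shows:
    #   rpc.py keyring_file_add_key key0 "$file"                        (target)
    #   rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 "$file"  (host)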
00:31:59.064 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:59.064 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:31:59.324 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:59.324 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:31:59.324 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 93721 /var/tmp/host.sock 00:31:59.324 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 93721 ']' 00:31:59.324 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:31:59.324 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:59.324 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:31:59.324 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:59.324 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xdg 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.xdg 00:31:59.583 00:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.xdg 00:31:59.842 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:31:59.842 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.SW4 00:31:59.842 00:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.842 00:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.842 00:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.842 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.SW4 00:31:59.842 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 
/tmp/spdk.key-sha256.SW4 00:32:00.100 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:32:00.100 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AZB 00:32:00.100 00:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.100 00:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.100 00:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.100 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.AZB 00:32:00.100 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.AZB 00:32:00.362 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:32:00.362 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Gv5 00:32:00.362 00:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.362 00:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.362 00:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.362 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Gv5 00:32:00.362 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Gv5 00:32:00.646 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:32:00.646 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:32:00.646 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:00.646 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:00.646 00:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:00.905 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:01.165 00:32:01.165 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:01.165 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:01.165 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:01.733 { 00:32:01.733 "auth": { 00:32:01.733 "dhgroup": "null", 00:32:01.733 "digest": "sha256", 00:32:01.733 "state": "completed" 00:32:01.733 }, 00:32:01.733 "cntlid": 1, 00:32:01.733 "listen_address": { 00:32:01.733 "adrfam": "IPv4", 00:32:01.733 "traddr": "10.0.0.2", 00:32:01.733 "trsvcid": "4420", 00:32:01.733 "trtype": "TCP" 00:32:01.733 }, 00:32:01.733 "peer_address": { 00:32:01.733 "adrfam": "IPv4", 00:32:01.733 "traddr": "10.0.0.1", 00:32:01.733 "trsvcid": "34192", 00:32:01.733 "trtype": "TCP" 00:32:01.733 }, 00:32:01.733 "qid": 0, 00:32:01.733 "state": "enabled" 00:32:01.733 } 00:32:01.733 ]' 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:01.733 00:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:01.992 00:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:07.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:07.265 00:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:07.265 00:32:07.265 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:07.265 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:07.265 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:07.265 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.265 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:07.265 00:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.265 00:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.265 00:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.265 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:07.265 { 00:32:07.265 "auth": { 00:32:07.265 "dhgroup": 
"null", 00:32:07.265 "digest": "sha256", 00:32:07.265 "state": "completed" 00:32:07.265 }, 00:32:07.265 "cntlid": 3, 00:32:07.265 "listen_address": { 00:32:07.265 "adrfam": "IPv4", 00:32:07.265 "traddr": "10.0.0.2", 00:32:07.265 "trsvcid": "4420", 00:32:07.265 "trtype": "TCP" 00:32:07.265 }, 00:32:07.265 "peer_address": { 00:32:07.265 "adrfam": "IPv4", 00:32:07.265 "traddr": "10.0.0.1", 00:32:07.265 "trsvcid": "34220", 00:32:07.265 "trtype": "TCP" 00:32:07.265 }, 00:32:07.265 "qid": 0, 00:32:07.265 "state": "enabled" 00:32:07.265 } 00:32:07.265 ]' 00:32:07.265 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:07.524 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:07.524 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:07.524 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:32:07.524 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:07.524 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:07.524 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:07.524 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:07.783 00:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:32:08.351 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:08.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:08.351 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:08.351 00:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.351 00:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:08.351 00:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.351 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:08.351 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:08.351 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:08.918 00:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:09.179 00:32:09.179 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:09.179 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:09.179 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:09.438 { 00:32:09.438 "auth": { 00:32:09.438 "dhgroup": "null", 00:32:09.438 "digest": "sha256", 00:32:09.438 "state": "completed" 00:32:09.438 }, 00:32:09.438 "cntlid": 5, 00:32:09.438 "listen_address": { 00:32:09.438 "adrfam": "IPv4", 00:32:09.438 "traddr": "10.0.0.2", 00:32:09.438 "trsvcid": "4420", 00:32:09.438 "trtype": "TCP" 00:32:09.438 }, 00:32:09.438 "peer_address": { 00:32:09.438 "adrfam": "IPv4", 00:32:09.438 "traddr": "10.0.0.1", 00:32:09.438 "trsvcid": "53784", 00:32:09.438 "trtype": "TCP" 00:32:09.438 }, 00:32:09.438 "qid": 0, 00:32:09.438 "state": "enabled" 00:32:09.438 } 00:32:09.438 ]' 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:32:09.438 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:09.697 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:09.697 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:09.697 00:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:09.955 00:58:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:32:10.522 00:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:10.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:10.522 00:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:10.522 00:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.522 00:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:10.522 00:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.522 00:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:10.522 00:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:10.522 00:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:10.781 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:32:10.781 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:10.781 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:10.781 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:10.781 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:10.781 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:32:10.781 00:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.781 00:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:11.039 00:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.040 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:11.040 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:11.298 00:32:11.298 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:11.298 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:11.298 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.560 00:58:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:11.560 { 00:32:11.560 "auth": { 00:32:11.560 "dhgroup": "null", 00:32:11.560 "digest": "sha256", 00:32:11.560 "state": "completed" 00:32:11.560 }, 00:32:11.560 "cntlid": 7, 00:32:11.560 "listen_address": { 00:32:11.560 "adrfam": "IPv4", 00:32:11.560 "traddr": "10.0.0.2", 00:32:11.560 "trsvcid": "4420", 00:32:11.560 "trtype": "TCP" 00:32:11.560 }, 00:32:11.560 "peer_address": { 00:32:11.560 "adrfam": "IPv4", 00:32:11.560 "traddr": "10.0.0.1", 00:32:11.560 "trsvcid": "53812", 00:32:11.560 "trtype": "TCP" 00:32:11.560 }, 00:32:11.560 "qid": 0, 00:32:11.560 "state": "enabled" 00:32:11.560 } 00:32:11.560 ]' 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:11.560 00:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:11.819 00:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:32:12.756 00:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:12.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:12.756 00:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:12.756 00:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:12.756 00:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:12.756 00:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:12.756 00:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:32:12.756 00:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:12.756 00:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:12.756 00:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:13.014 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:32:13.014 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:13.014 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:13.014 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:13.014 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:13.014 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:32:13.014 00:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.014 00:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.014 00:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.015 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:13.015 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:13.273 00:32:13.273 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:13.273 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:13.273 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:13.537 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.537 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:13.537 00:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.537 00:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.537 00:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.537 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:13.537 { 00:32:13.537 "auth": { 00:32:13.537 "dhgroup": "ffdhe2048", 00:32:13.537 "digest": "sha256", 00:32:13.537 "state": "completed" 00:32:13.537 }, 00:32:13.537 "cntlid": 9, 00:32:13.537 "listen_address": { 00:32:13.537 "adrfam": "IPv4", 00:32:13.537 "traddr": "10.0.0.2", 00:32:13.538 "trsvcid": "4420", 00:32:13.538 "trtype": "TCP" 00:32:13.538 }, 00:32:13.538 "peer_address": { 00:32:13.538 "adrfam": "IPv4", 00:32:13.538 "traddr": "10.0.0.1", 00:32:13.538 "trsvcid": "53836", 00:32:13.538 "trtype": "TCP" 00:32:13.538 }, 00:32:13.538 "qid": 0, 00:32:13.538 "state": "enabled" 00:32:13.538 } 00:32:13.538 ]' 00:32:13.538 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:13.538 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:13.538 00:58:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:13.538 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:13.538 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:13.805 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:13.805 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:13.805 00:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:14.063 00:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:32:14.631 00:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:14.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:14.631 00:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:14.631 00:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:14.631 00:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:14.631 00:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:14.631 00:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:14.631 00:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:14.631 00:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:14.889 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:15.456 00:32:15.456 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:15.456 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:15.456 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:15.456 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.456 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:15.456 00:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.456 00:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:15.715 00:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.715 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:15.715 { 00:32:15.715 "auth": { 00:32:15.715 "dhgroup": "ffdhe2048", 00:32:15.715 "digest": "sha256", 00:32:15.715 "state": "completed" 00:32:15.715 }, 00:32:15.715 "cntlid": 11, 00:32:15.715 "listen_address": { 00:32:15.715 "adrfam": "IPv4", 00:32:15.715 "traddr": "10.0.0.2", 00:32:15.715 "trsvcid": "4420", 00:32:15.715 "trtype": "TCP" 00:32:15.715 }, 00:32:15.715 "peer_address": { 00:32:15.715 "adrfam": "IPv4", 00:32:15.715 "traddr": "10.0.0.1", 00:32:15.715 "trsvcid": "53856", 00:32:15.715 "trtype": "TCP" 00:32:15.715 }, 00:32:15.715 "qid": 0, 00:32:15.715 "state": "enabled" 00:32:15.715 } 00:32:15.715 ]' 00:32:15.715 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:15.715 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:15.715 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:15.715 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:15.715 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:15.715 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:15.715 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:15.715 00:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:15.974 00:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:32:16.908 00:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:16.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:16.908 00:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:16.908 00:58:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.908 00:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:16.908 00:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.908 00:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:16.908 00:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:16.909 00:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:16.909 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:17.166 00:32:17.426 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:17.426 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:17.426 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:17.426 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.426 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:17.426 00:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:17.426 00:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.426 00:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:17.426 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:17.426 { 00:32:17.426 "auth": { 00:32:17.426 "dhgroup": "ffdhe2048", 00:32:17.426 "digest": "sha256", 00:32:17.426 "state": "completed" 00:32:17.426 }, 00:32:17.426 "cntlid": 13, 00:32:17.426 "listen_address": { 
00:32:17.426 "adrfam": "IPv4", 00:32:17.426 "traddr": "10.0.0.2", 00:32:17.426 "trsvcid": "4420", 00:32:17.426 "trtype": "TCP" 00:32:17.426 }, 00:32:17.426 "peer_address": { 00:32:17.426 "adrfam": "IPv4", 00:32:17.426 "traddr": "10.0.0.1", 00:32:17.426 "trsvcid": "36106", 00:32:17.426 "trtype": "TCP" 00:32:17.426 }, 00:32:17.426 "qid": 0, 00:32:17.426 "state": "enabled" 00:32:17.426 } 00:32:17.426 ]' 00:32:17.426 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:17.685 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:17.685 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:17.685 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:17.685 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:17.685 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:17.685 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:17.685 00:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:17.942 00:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:32:18.506 00:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:18.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:18.506 00:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:18.506 00:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.506 00:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:18.506 00:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.506 00:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:18.506 00:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:18.506 00:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:18.764 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:32:18.764 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:18.764 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:18.764 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:18.764 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:18.764 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:32:18.764 
00:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.764 00:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:18.764 00:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.764 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:18.764 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:19.330 00:32:19.330 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:19.330 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:19.331 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:19.588 { 00:32:19.588 "auth": { 00:32:19.588 "dhgroup": "ffdhe2048", 00:32:19.588 "digest": "sha256", 00:32:19.588 "state": "completed" 00:32:19.588 }, 00:32:19.588 "cntlid": 15, 00:32:19.588 "listen_address": { 00:32:19.588 "adrfam": "IPv4", 00:32:19.588 "traddr": "10.0.0.2", 00:32:19.588 "trsvcid": "4420", 00:32:19.588 "trtype": "TCP" 00:32:19.588 }, 00:32:19.588 "peer_address": { 00:32:19.588 "adrfam": "IPv4", 00:32:19.588 "traddr": "10.0.0.1", 00:32:19.588 "trsvcid": "36134", 00:32:19.588 "trtype": "TCP" 00:32:19.588 }, 00:32:19.588 "qid": 0, 00:32:19.588 "state": "enabled" 00:32:19.588 } 00:32:19.588 ]' 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:19.588 00:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:19.846 00:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:32:20.781 00:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:20.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:20.781 00:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:20.781 00:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.781 00:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.781 00:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.781 00:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.781 00:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:20.781 00:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:20.781 00:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:20.781 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:21.356 00:32:21.356 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:21.356 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:21.356 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:21.614 { 00:32:21.614 "auth": { 00:32:21.614 "dhgroup": "ffdhe3072", 00:32:21.614 "digest": "sha256", 00:32:21.614 "state": "completed" 00:32:21.614 }, 00:32:21.614 "cntlid": 17, 00:32:21.614 "listen_address": { 00:32:21.614 "adrfam": "IPv4", 00:32:21.614 "traddr": "10.0.0.2", 00:32:21.614 "trsvcid": "4420", 00:32:21.614 "trtype": "TCP" 00:32:21.614 }, 00:32:21.614 "peer_address": { 00:32:21.614 "adrfam": "IPv4", 00:32:21.614 "traddr": "10.0.0.1", 00:32:21.614 "trsvcid": "36160", 00:32:21.614 "trtype": "TCP" 00:32:21.614 }, 00:32:21.614 "qid": 0, 00:32:21.614 "state": "enabled" 00:32:21.614 } 00:32:21.614 ]' 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:21.614 00:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:21.872 00:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:32:22.817 00:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:22.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:22.817 00:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:22.817 00:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:22.817 00:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.817 00:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:22.817 00:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:22.817 00:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:22.817 00:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:22.817 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:23.113 00:32:23.372 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:23.372 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:23.372 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:23.631 { 00:32:23.631 "auth": { 00:32:23.631 "dhgroup": "ffdhe3072", 00:32:23.631 "digest": "sha256", 00:32:23.631 "state": "completed" 00:32:23.631 }, 00:32:23.631 "cntlid": 19, 00:32:23.631 "listen_address": { 00:32:23.631 "adrfam": "IPv4", 00:32:23.631 "traddr": "10.0.0.2", 00:32:23.631 "trsvcid": "4420", 00:32:23.631 "trtype": "TCP" 00:32:23.631 }, 00:32:23.631 "peer_address": { 00:32:23.631 "adrfam": "IPv4", 00:32:23.631 "traddr": "10.0.0.1", 00:32:23.631 "trsvcid": "36178", 00:32:23.631 "trtype": "TCP" 00:32:23.631 }, 00:32:23.631 "qid": 0, 00:32:23.631 "state": "enabled" 00:32:23.631 } 00:32:23.631 ]' 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.dhgroup' 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:23.631 00:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:23.890 00:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:32:24.824 00:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:24.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:24.824 00:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:24.824 00:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:24.824 00:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:24.824 00:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:24.824 00:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:24.824 00:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:24.824 00:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:25.082 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:25.341 00:32:25.341 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:25.341 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:25.341 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:25.599 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.599 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:25.599 00:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:25.599 00:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:25.599 00:58:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:25.599 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:25.599 { 00:32:25.599 "auth": { 00:32:25.599 "dhgroup": "ffdhe3072", 00:32:25.599 "digest": "sha256", 00:32:25.599 "state": "completed" 00:32:25.599 }, 00:32:25.599 "cntlid": 21, 00:32:25.599 "listen_address": { 00:32:25.599 "adrfam": "IPv4", 00:32:25.599 "traddr": "10.0.0.2", 00:32:25.599 "trsvcid": "4420", 00:32:25.599 "trtype": "TCP" 00:32:25.599 }, 00:32:25.599 "peer_address": { 00:32:25.599 "adrfam": "IPv4", 00:32:25.599 "traddr": "10.0.0.1", 00:32:25.599 "trsvcid": "36208", 00:32:25.599 "trtype": "TCP" 00:32:25.599 }, 00:32:25.599 "qid": 0, 00:32:25.599 "state": "enabled" 00:32:25.599 } 00:32:25.599 ]' 00:32:25.599 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:25.599 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:25.599 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:25.857 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:25.858 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:25.858 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:25.858 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:25.858 00:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:26.116 00:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:32:26.682 00:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:26.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:26.682 00:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:26.682 00:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 
-- # xtrace_disable 00:32:26.682 00:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:26.682 00:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:26.682 00:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:26.682 00:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:26.682 00:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:26.940 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:27.238 00:32:27.239 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:27.239 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:27.239 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:27.497 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.497 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:27.497 00:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:27.497 00:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:27.497 00:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:27.497 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:27.497 { 00:32:27.497 "auth": { 00:32:27.497 "dhgroup": "ffdhe3072", 00:32:27.497 "digest": "sha256", 00:32:27.497 "state": "completed" 00:32:27.497 }, 00:32:27.497 "cntlid": 23, 00:32:27.497 "listen_address": { 00:32:27.497 "adrfam": "IPv4", 00:32:27.497 "traddr": 
"10.0.0.2", 00:32:27.497 "trsvcid": "4420", 00:32:27.497 "trtype": "TCP" 00:32:27.497 }, 00:32:27.497 "peer_address": { 00:32:27.497 "adrfam": "IPv4", 00:32:27.497 "traddr": "10.0.0.1", 00:32:27.497 "trsvcid": "49086", 00:32:27.497 "trtype": "TCP" 00:32:27.497 }, 00:32:27.497 "qid": 0, 00:32:27.497 "state": "enabled" 00:32:27.497 } 00:32:27.497 ]' 00:32:27.497 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:27.757 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:27.757 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:27.757 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:27.757 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:27.757 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:27.757 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:27.757 00:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:28.016 00:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:32:28.582 00:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:28.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:28.582 00:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:28.582 00:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:28.582 00:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:28.582 00:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:28.582 00:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.582 00:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:28.582 00:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:28.582 00:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:29.149 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:29.408 00:32:29.408 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:29.408 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:29.408 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:29.668 { 00:32:29.668 "auth": { 00:32:29.668 "dhgroup": "ffdhe4096", 00:32:29.668 "digest": "sha256", 00:32:29.668 "state": "completed" 00:32:29.668 }, 00:32:29.668 "cntlid": 25, 00:32:29.668 "listen_address": { 00:32:29.668 "adrfam": "IPv4", 00:32:29.668 "traddr": "10.0.0.2", 00:32:29.668 "trsvcid": "4420", 00:32:29.668 "trtype": "TCP" 00:32:29.668 }, 00:32:29.668 "peer_address": { 00:32:29.668 "adrfam": "IPv4", 00:32:29.668 "traddr": "10.0.0.1", 00:32:29.668 "trsvcid": "49112", 00:32:29.668 "trtype": "TCP" 00:32:29.668 }, 00:32:29.668 "qid": 0, 00:32:29.668 "state": "enabled" 00:32:29.668 } 00:32:29.668 ]' 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:29.668 00:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:29.927 00:58:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:32:30.863 00:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:30.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:30.863 00:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:30.863 00:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:30.863 00:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:30.863 00:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:30.863 00:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:30.863 00:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:30.863 00:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:31.121 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:31.411 00:32:31.411 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:31.411 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:31.411 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:31.669 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:32:31.669 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:31.669 00:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.669 00:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.669 00:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.669 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:31.669 { 00:32:31.669 "auth": { 00:32:31.669 "dhgroup": "ffdhe4096", 00:32:31.669 "digest": "sha256", 00:32:31.669 "state": "completed" 00:32:31.669 }, 00:32:31.669 "cntlid": 27, 00:32:31.669 "listen_address": { 00:32:31.669 "adrfam": "IPv4", 00:32:31.669 "traddr": "10.0.0.2", 00:32:31.669 "trsvcid": "4420", 00:32:31.669 "trtype": "TCP" 00:32:31.669 }, 00:32:31.669 "peer_address": { 00:32:31.669 "adrfam": "IPv4", 00:32:31.669 "traddr": "10.0.0.1", 00:32:31.669 "trsvcid": "49146", 00:32:31.669 "trtype": "TCP" 00:32:31.669 }, 00:32:31.669 "qid": 0, 00:32:31.669 "state": "enabled" 00:32:31.669 } 00:32:31.669 ]' 00:32:31.669 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:31.669 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:31.669 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:31.928 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:31.928 00:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:31.928 00:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:31.928 00:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:31.928 00:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:32.187 00:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:32:32.754 00:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:32.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:32.754 00:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:32.754 00:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.754 00:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:32.754 00:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.754 00:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:32.754 00:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:32.754 00:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:33.013 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:33.279 00:32:33.279 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:33.279 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:33.279 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:33.537 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.537 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:33.537 00:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:33.538 00:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:33.538 00:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:33.538 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:33.538 { 00:32:33.538 "auth": { 00:32:33.538 "dhgroup": "ffdhe4096", 00:32:33.538 "digest": "sha256", 00:32:33.538 "state": "completed" 00:32:33.538 }, 00:32:33.538 "cntlid": 29, 00:32:33.538 "listen_address": { 00:32:33.538 "adrfam": "IPv4", 00:32:33.538 "traddr": "10.0.0.2", 00:32:33.538 "trsvcid": "4420", 00:32:33.538 "trtype": "TCP" 00:32:33.538 }, 00:32:33.538 "peer_address": { 00:32:33.538 "adrfam": "IPv4", 00:32:33.538 "traddr": "10.0.0.1", 00:32:33.538 "trsvcid": "49168", 00:32:33.538 "trtype": "TCP" 00:32:33.538 }, 00:32:33.538 "qid": 0, 00:32:33.538 "state": "enabled" 00:32:33.538 } 00:32:33.538 ]' 00:32:33.538 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:33.796 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:33.796 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:33.796 
00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:33.796 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:33.796 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:33.796 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:33.796 00:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:34.054 00:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:32:34.990 00:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:34.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:34.990 00:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:34.990 00:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.990 00:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.990 00:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.990 00:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:34.990 00:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:34.990 00:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:35.249 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:35.508 00:32:35.508 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:35.508 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:35.508 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:35.767 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.767 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:35.767 00:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:35.767 00:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:35.767 00:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:35.767 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:35.767 { 00:32:35.767 "auth": { 00:32:35.767 "dhgroup": "ffdhe4096", 00:32:35.767 "digest": "sha256", 00:32:35.767 "state": "completed" 00:32:35.767 }, 00:32:35.767 "cntlid": 31, 00:32:35.767 "listen_address": { 00:32:35.767 "adrfam": "IPv4", 00:32:35.767 "traddr": "10.0.0.2", 00:32:35.767 "trsvcid": "4420", 00:32:35.767 "trtype": "TCP" 00:32:35.767 }, 00:32:35.767 "peer_address": { 00:32:35.767 "adrfam": "IPv4", 00:32:35.767 "traddr": "10.0.0.1", 00:32:35.767 "trsvcid": "49198", 00:32:35.767 "trtype": "TCP" 00:32:35.767 }, 00:32:35.767 "qid": 0, 00:32:35.767 "state": "enabled" 00:32:35.767 } 00:32:35.767 ]' 00:32:35.767 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:35.767 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:35.767 00:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:35.767 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:35.767 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:36.026 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:36.026 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:36.026 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:36.026 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:32:36.962 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:36.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:36.962 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:36.962 00:58:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.962 00:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:36.962 00:58:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.962 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.962 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:36.962 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:36.962 00:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:37.220 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:37.479 00:32:37.479 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:37.479 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:37.479 00:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:37.738 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:37.997 { 00:32:37.997 "auth": { 00:32:37.997 "dhgroup": "ffdhe6144", 00:32:37.997 "digest": "sha256", 00:32:37.997 "state": "completed" 
00:32:37.997 }, 00:32:37.997 "cntlid": 33, 00:32:37.997 "listen_address": { 00:32:37.997 "adrfam": "IPv4", 00:32:37.997 "traddr": "10.0.0.2", 00:32:37.997 "trsvcid": "4420", 00:32:37.997 "trtype": "TCP" 00:32:37.997 }, 00:32:37.997 "peer_address": { 00:32:37.997 "adrfam": "IPv4", 00:32:37.997 "traddr": "10.0.0.1", 00:32:37.997 "trsvcid": "49414", 00:32:37.997 "trtype": "TCP" 00:32:37.997 }, 00:32:37.997 "qid": 0, 00:32:37.997 "state": "enabled" 00:32:37.997 } 00:32:37.997 ]' 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:37.997 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:38.255 00:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:32:38.823 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:38.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:38.823 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:38.823 00:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:38.823 00:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:38.823 00:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:38.823 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:38.823 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.823 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:39.094 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:39.661 00:32:39.661 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:39.661 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:39.661 00:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:39.920 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.920 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:39.920 00:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:39.920 00:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:39.920 00:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:39.920 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:39.920 { 00:32:39.920 "auth": { 00:32:39.920 "dhgroup": "ffdhe6144", 00:32:39.920 "digest": "sha256", 00:32:39.920 "state": "completed" 00:32:39.920 }, 00:32:39.920 "cntlid": 35, 00:32:39.920 "listen_address": { 00:32:39.920 "adrfam": "IPv4", 00:32:39.920 "traddr": "10.0.0.2", 00:32:39.920 "trsvcid": "4420", 00:32:39.920 "trtype": "TCP" 00:32:39.920 }, 00:32:39.920 "peer_address": { 00:32:39.920 "adrfam": "IPv4", 00:32:39.920 "traddr": "10.0.0.1", 00:32:39.920 "trsvcid": "49428", 00:32:39.920 "trtype": "TCP" 00:32:39.920 }, 00:32:39.920 "qid": 0, 00:32:39.920 "state": "enabled" 00:32:39.920 } 00:32:39.920 ]' 00:32:39.920 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:39.920 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:39.920 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:40.179 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:40.179 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:40.179 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:40.179 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:40.179 00:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:40.437 00:58:43 
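For orientation, the pass traced above is one connect_authenticate iteration (digest sha256, DH group ffdhe6144, key1). Condensed into a plain sketch, with the host RPC socket, NQNs and key name taken verbatim from the trace, and with rpc_cmd standing for the framework helper that the trace uses for the target-side subsystem calls, the sequence amounts to:

    # Host side: restrict DH-HMAC-CHAP negotiation to the digest/DH group under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # Target side: authorize the host NQN with the key under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1
    # Attach a controller over the authenticated queue pair, inspect it, then tear it down
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # auth fields checked with jq
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0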
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:32:41.004 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:41.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:41.004 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:41.004 00:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.004 00:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:41.004 00:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.004 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:41.004 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:41.004 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:41.263 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:41.830 00:32:41.830 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:41.830 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:41.830 00:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:42.088 { 00:32:42.088 "auth": { 00:32:42.088 "dhgroup": "ffdhe6144", 00:32:42.088 "digest": "sha256", 00:32:42.088 "state": "completed" 00:32:42.088 }, 00:32:42.088 "cntlid": 37, 00:32:42.088 "listen_address": { 00:32:42.088 "adrfam": "IPv4", 00:32:42.088 "traddr": "10.0.0.2", 00:32:42.088 "trsvcid": "4420", 00:32:42.088 "trtype": "TCP" 00:32:42.088 }, 00:32:42.088 "peer_address": { 00:32:42.088 "adrfam": "IPv4", 00:32:42.088 "traddr": "10.0.0.1", 00:32:42.088 "trsvcid": "49458", 00:32:42.088 "trtype": "TCP" 00:32:42.088 }, 00:32:42.088 "qid": 0, 00:32:42.088 "state": "enabled" 00:32:42.088 } 00:32:42.088 ]' 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:42.088 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:42.654 00:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:32:43.221 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:43.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:43.221 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:43.221 00:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:43.221 00:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:43.221 00:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:43.221 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:43.221 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:43.221 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:43.480 00:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:44.047 00:32:44.047 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:44.047 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:44.047 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:44.305 { 00:32:44.305 "auth": { 00:32:44.305 "dhgroup": "ffdhe6144", 00:32:44.305 "digest": "sha256", 00:32:44.305 "state": "completed" 00:32:44.305 }, 00:32:44.305 "cntlid": 39, 00:32:44.305 "listen_address": { 00:32:44.305 "adrfam": "IPv4", 00:32:44.305 "traddr": "10.0.0.2", 00:32:44.305 "trsvcid": "4420", 00:32:44.305 "trtype": "TCP" 00:32:44.305 }, 00:32:44.305 "peer_address": { 00:32:44.305 "adrfam": "IPv4", 00:32:44.305 "traddr": "10.0.0.1", 00:32:44.305 "trsvcid": "49496", 00:32:44.305 "trtype": "TCP" 00:32:44.305 }, 00:32:44.305 "qid": 0, 00:32:44.305 "state": "enabled" 00:32:44.305 } 00:32:44.305 ]' 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:44.305 
00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:44.305 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:44.306 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:44.564 00:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:32:45.499 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:45.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:45.499 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:45.499 00:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:45.499 00:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.499 00:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:45.499 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.499 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:45.499 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:45.499 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:45.758 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:32:45.758 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:45.758 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:45.758 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:45.758 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:45.758 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:32:45.758 00:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:45.758 00:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.758 00:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:45.758 00:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:45.758 00:58:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:46.329 00:32:46.329 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:46.329 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:46.329 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:46.593 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.593 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:46.593 00:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.593 00:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.593 00:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.593 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:46.593 { 00:32:46.593 "auth": { 00:32:46.593 "dhgroup": "ffdhe8192", 00:32:46.593 "digest": "sha256", 00:32:46.593 "state": "completed" 00:32:46.593 }, 00:32:46.594 "cntlid": 41, 00:32:46.594 "listen_address": { 00:32:46.594 "adrfam": "IPv4", 00:32:46.594 "traddr": "10.0.0.2", 00:32:46.594 "trsvcid": "4420", 00:32:46.594 "trtype": "TCP" 00:32:46.594 }, 00:32:46.594 "peer_address": { 00:32:46.594 "adrfam": "IPv4", 00:32:46.594 "traddr": "10.0.0.1", 00:32:46.594 "trsvcid": "49534", 00:32:46.594 "trtype": "TCP" 00:32:46.594 }, 00:32:46.594 "qid": 0, 00:32:46.594 "state": "enabled" 00:32:46.594 } 00:32:46.594 ]' 00:32:46.594 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:46.594 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:46.594 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:46.852 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:46.852 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:46.852 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:46.852 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:46.852 00:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:47.111 00:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:32:47.677 00:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:47.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:47.677 00:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:47.677 00:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:47.677 00:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.935 00:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:47.935 00:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:47.935 00:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:47.935 00:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:47.935 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:48.870 00:32:48.870 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:48.870 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:48.870 00:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:48.870 { 00:32:48.870 "auth": { 00:32:48.870 "dhgroup": "ffdhe8192", 00:32:48.870 "digest": "sha256", 00:32:48.870 "state": 
"completed" 00:32:48.870 }, 00:32:48.870 "cntlid": 43, 00:32:48.870 "listen_address": { 00:32:48.870 "adrfam": "IPv4", 00:32:48.870 "traddr": "10.0.0.2", 00:32:48.870 "trsvcid": "4420", 00:32:48.870 "trtype": "TCP" 00:32:48.870 }, 00:32:48.870 "peer_address": { 00:32:48.870 "adrfam": "IPv4", 00:32:48.870 "traddr": "10.0.0.1", 00:32:48.870 "trsvcid": "59512", 00:32:48.870 "trtype": "TCP" 00:32:48.870 }, 00:32:48.870 "qid": 0, 00:32:48.870 "state": "enabled" 00:32:48.870 } 00:32:48.870 ]' 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:48.870 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:49.128 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:49.128 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:49.128 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:49.386 00:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:32:49.952 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:49.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:49.952 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:49.952 00:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:49.952 00:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:49.952 00:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:49.952 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:49.952 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:49.952 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:50.211 00:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:51.147 00:32:51.147 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:51.147 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:51.147 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:51.405 { 00:32:51.405 "auth": { 00:32:51.405 "dhgroup": "ffdhe8192", 00:32:51.405 "digest": "sha256", 00:32:51.405 "state": "completed" 00:32:51.405 }, 00:32:51.405 "cntlid": 45, 00:32:51.405 "listen_address": { 00:32:51.405 "adrfam": "IPv4", 00:32:51.405 "traddr": "10.0.0.2", 00:32:51.405 "trsvcid": "4420", 00:32:51.405 "trtype": "TCP" 00:32:51.405 }, 00:32:51.405 "peer_address": { 00:32:51.405 "adrfam": "IPv4", 00:32:51.405 "traddr": "10.0.0.1", 00:32:51.405 "trsvcid": "59548", 00:32:51.405 "trtype": "TCP" 00:32:51.405 }, 00:32:51.405 "qid": 0, 00:32:51.405 "state": "enabled" 00:32:51.405 } 00:32:51.405 ]' 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:51.405 00:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:51.664 00:58:54 
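The repeated auth.sh@43 through @47 checks above boil down to asserting three fields of the first qpair the target reports. A minimal sketch of that assertion, assuming the captured JSON is fed to jq through a here-string as the traced jq calls suggest, and using the expected values of the iteration that just completed (sha256 over ffdhe8192):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # The negotiated digest, DH group and final auth state must match what the host was limited to
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]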
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:32:52.598 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:52.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:52.598 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:52.598 00:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:52.598 00:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.598 00:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:52.598 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:52.598 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:52.598 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:52.856 00:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:53.424 00:32:53.424 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:53.424 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:53.424 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:53.688 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:32:53.688 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:53.688 00:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:53.688 00:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:53.688 00:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:53.688 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:53.688 { 00:32:53.688 "auth": { 00:32:53.688 "dhgroup": "ffdhe8192", 00:32:53.689 "digest": "sha256", 00:32:53.689 "state": "completed" 00:32:53.689 }, 00:32:53.689 "cntlid": 47, 00:32:53.689 "listen_address": { 00:32:53.689 "adrfam": "IPv4", 00:32:53.689 "traddr": "10.0.0.2", 00:32:53.689 "trsvcid": "4420", 00:32:53.689 "trtype": "TCP" 00:32:53.689 }, 00:32:53.689 "peer_address": { 00:32:53.689 "adrfam": "IPv4", 00:32:53.689 "traddr": "10.0.0.1", 00:32:53.689 "trsvcid": "59578", 00:32:53.689 "trtype": "TCP" 00:32:53.689 }, 00:32:53.689 "qid": 0, 00:32:53.689 "state": "enabled" 00:32:53.689 } 00:32:53.689 ]' 00:32:53.689 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:53.689 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:53.689 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:53.947 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:53.947 00:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:53.947 00:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:53.947 00:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:53.947 00:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:54.206 00:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:32:54.774 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:55.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:55.032 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:55.032 00:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.032 00:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:55.032 00:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.032 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:32:55.032 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.032 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:55.032 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups null 00:32:55.032 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:55.291 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:55.549 00:32:55.549 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:55.549 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:55.549 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:55.807 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.807 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:55.807 00:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.807 00:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:55.807 00:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.807 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:55.807 { 00:32:55.807 "auth": { 00:32:55.807 "dhgroup": "null", 00:32:55.807 "digest": "sha384", 00:32:55.807 "state": "completed" 00:32:55.807 }, 00:32:55.807 "cntlid": 49, 00:32:55.807 "listen_address": { 00:32:55.807 "adrfam": "IPv4", 00:32:55.807 "traddr": "10.0.0.2", 00:32:55.807 "trsvcid": "4420", 00:32:55.807 "trtype": "TCP" 00:32:55.807 }, 00:32:55.807 "peer_address": { 00:32:55.807 "adrfam": "IPv4", 00:32:55.807 "traddr": "10.0.0.1", 00:32:55.807 "trsvcid": "59610", 00:32:55.807 "trtype": "TCP" 00:32:55.807 }, 00:32:55.807 "qid": 0, 00:32:55.807 "state": "enabled" 00:32:55.807 } 00:32:55.807 ]' 00:32:55.807 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
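The auth.sh@84, @85 and @86 markers in the trace (for digest, for dhgroup, for keyid) show the script sweeping the full matrix; at this point the digest loop has advanced from sha256 to sha384 and the DH group loop has restarted at "null". Reconstructed from those trace lines, and with the digests, dhgroups and keys arrays defined earlier in target/auth.sh and not visible here, the driving loop looks roughly like:

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Re-arm the host with exactly one digest/DH group, then run one authenticated attach
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done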
00:32:55.807 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:55.807 00:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:55.807 00:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:32:55.807 00:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:56.065 00:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:56.065 00:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:56.065 00:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:56.322 00:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:32:56.890 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:56.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:56.890 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:56.890 00:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:56.890 00:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:57.149 00:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.149 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:57.149 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:57.149 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 00:32:57.408 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:32:57.667 00:32:57.667 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:57.667 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:57.667 00:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:57.925 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.925 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:57.925 00:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.925 00:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:57.925 00:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.925 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:57.925 { 00:32:57.925 "auth": { 00:32:57.925 "dhgroup": "null", 00:32:57.925 "digest": "sha384", 00:32:57.925 "state": "completed" 00:32:57.925 }, 00:32:57.925 "cntlid": 51, 00:32:57.925 "listen_address": { 00:32:57.925 "adrfam": "IPv4", 00:32:57.925 "traddr": "10.0.0.2", 00:32:57.925 "trsvcid": "4420", 00:32:57.925 "trtype": "TCP" 00:32:57.925 }, 00:32:57.925 "peer_address": { 00:32:57.925 "adrfam": "IPv4", 00:32:57.925 "traddr": "10.0.0.1", 00:32:57.925 "trsvcid": "40424", 00:32:57.925 "trtype": "TCP" 00:32:57.925 }, 00:32:57.925 "qid": 0, 00:32:57.925 "state": "enabled" 00:32:57.925 } 00:32:57.925 ]' 00:32:57.925 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:57.926 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:57.926 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:32:57.926 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:32:57.926 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:32:58.184 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:58.184 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:58.184 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:58.443 00:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:32:59.011 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:59.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:59.011 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:32:59.011 00:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:59.011 00:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.011 00:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:59.011 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:32:59.011 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:59.011 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:59.329 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:59.588 00:32:59.588 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:32:59.588 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:32:59.588 00:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:59.846 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.846 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:59.847 00:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:59.847 00:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.847 00:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:59.847 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:32:59.847 { 00:32:59.847 "auth": { 00:32:59.847 "dhgroup": "null", 00:32:59.847 "digest": "sha384", 00:32:59.847 "state": "completed" 00:32:59.847 }, 
00:32:59.847 "cntlid": 53, 00:32:59.847 "listen_address": { 00:32:59.847 "adrfam": "IPv4", 00:32:59.847 "traddr": "10.0.0.2", 00:32:59.847 "trsvcid": "4420", 00:32:59.847 "trtype": "TCP" 00:32:59.847 }, 00:32:59.847 "peer_address": { 00:32:59.847 "adrfam": "IPv4", 00:32:59.847 "traddr": "10.0.0.1", 00:32:59.847 "trsvcid": "40462", 00:32:59.847 "trtype": "TCP" 00:32:59.847 }, 00:32:59.847 "qid": 0, 00:32:59.847 "state": "enabled" 00:32:59.847 } 00:32:59.847 ]' 00:32:59.847 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:32:59.847 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:59.847 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:00.105 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:33:00.106 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:00.106 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:00.106 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:00.106 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:00.364 00:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:33:01.299 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:01.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:01.299 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:01.299 00:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:01.299 00:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:01.299 00:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:01.299 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:01.299 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:33:01.299 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 
--dhchap-key key3 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:01.558 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:01.817 00:33:01.817 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:01.817 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:01.817 00:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:02.077 { 00:33:02.077 "auth": { 00:33:02.077 "dhgroup": "null", 00:33:02.077 "digest": "sha384", 00:33:02.077 "state": "completed" 00:33:02.077 }, 00:33:02.077 "cntlid": 55, 00:33:02.077 "listen_address": { 00:33:02.077 "adrfam": "IPv4", 00:33:02.077 "traddr": "10.0.0.2", 00:33:02.077 "trsvcid": "4420", 00:33:02.077 "trtype": "TCP" 00:33:02.077 }, 00:33:02.077 "peer_address": { 00:33:02.077 "adrfam": "IPv4", 00:33:02.077 "traddr": "10.0.0.1", 00:33:02.077 "trsvcid": "40494", 00:33:02.077 "trtype": "TCP" 00:33:02.077 }, 00:33:02.077 "qid": 0, 00:33:02.077 "state": "enabled" 00:33:02.077 } 00:33:02.077 ]' 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:33:02.077 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:02.335 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:02.335 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:02.335 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:02.335 00:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:33:03.272 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:03.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:03.272 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:03.272 00:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.272 00:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:03.272 00:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.272 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:33:03.272 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:03.272 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:03.272 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:03.530 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:03.789 00:33:03.789 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:03.789 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:03.789 00:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:04.047 00:59:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:04.047 { 00:33:04.047 "auth": { 00:33:04.047 "dhgroup": "ffdhe2048", 00:33:04.047 "digest": "sha384", 00:33:04.047 "state": "completed" 00:33:04.047 }, 00:33:04.047 "cntlid": 57, 00:33:04.047 "listen_address": { 00:33:04.047 "adrfam": "IPv4", 00:33:04.047 "traddr": "10.0.0.2", 00:33:04.047 "trsvcid": "4420", 00:33:04.047 "trtype": "TCP" 00:33:04.047 }, 00:33:04.047 "peer_address": { 00:33:04.047 "adrfam": "IPv4", 00:33:04.047 "traddr": "10.0.0.1", 00:33:04.047 "trsvcid": "40506", 00:33:04.047 "trtype": "TCP" 00:33:04.047 }, 00:33:04.047 "qid": 0, 00:33:04.047 "state": "enabled" 00:33:04.047 } 00:33:04.047 ]' 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:04.047 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:04.048 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:04.305 00:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:33:05.153 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:05.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:05.154 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:05.154 00:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:05.154 00:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:05.154 00:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:05.154 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:05.154 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:05.154 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:05.411 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:05.977 00:33:05.977 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:05.977 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:05.977 00:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:06.236 { 00:33:06.236 "auth": { 00:33:06.236 "dhgroup": "ffdhe2048", 00:33:06.236 "digest": "sha384", 00:33:06.236 "state": "completed" 00:33:06.236 }, 00:33:06.236 "cntlid": 59, 00:33:06.236 "listen_address": { 00:33:06.236 "adrfam": "IPv4", 00:33:06.236 "traddr": "10.0.0.2", 00:33:06.236 "trsvcid": "4420", 00:33:06.236 "trtype": "TCP" 00:33:06.236 }, 00:33:06.236 "peer_address": { 00:33:06.236 "adrfam": "IPv4", 00:33:06.236 "traddr": "10.0.0.1", 00:33:06.236 "trsvcid": "40528", 00:33:06.236 "trtype": "TCP" 00:33:06.236 }, 00:33:06.236 "qid": 0, 00:33:06.236 "state": "enabled" 00:33:06.236 } 00:33:06.236 ]' 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:06.236 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:06.495 00:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:07.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:07.428 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:07.685 00:33:07.956 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:07.956 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:07.956 00:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:07.956 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.956 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:07.956 00:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.956 00:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:07.956 00:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.956 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:07.956 { 00:33:07.956 "auth": { 00:33:07.956 "dhgroup": "ffdhe2048", 00:33:07.956 "digest": "sha384", 00:33:07.956 "state": "completed" 00:33:07.956 }, 00:33:07.956 "cntlid": 61, 00:33:07.956 "listen_address": { 00:33:07.956 "adrfam": "IPv4", 00:33:07.956 "traddr": "10.0.0.2", 00:33:07.956 "trsvcid": "4420", 00:33:07.956 "trtype": "TCP" 00:33:07.956 }, 00:33:07.956 "peer_address": { 00:33:07.956 "adrfam": "IPv4", 00:33:07.956 "traddr": "10.0.0.1", 00:33:07.956 "trsvcid": "54792", 00:33:07.956 "trtype": "TCP" 00:33:07.956 }, 00:33:07.956 "qid": 0, 00:33:07.956 "state": "enabled" 00:33:07.956 } 00:33:07.956 ]' 00:33:07.956 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:08.238 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:08.238 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:08.238 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:08.238 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:08.238 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:08.238 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:08.238 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:08.497 00:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:33:09.063 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:09.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:09.063 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:09.063 00:59:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:33:09.063 00:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:09.063 00:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:09.063 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:09.063 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:09.063 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:09.322 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:09.580 00:33:09.581 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:09.581 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:09.581 00:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:10.145 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.145 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:10.145 00:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.145 00:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.145 00:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.145 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:10.145 { 00:33:10.145 "auth": { 00:33:10.145 "dhgroup": "ffdhe2048", 00:33:10.145 "digest": "sha384", 00:33:10.145 "state": "completed" 00:33:10.145 }, 00:33:10.145 "cntlid": 63, 00:33:10.145 "listen_address": { 00:33:10.145 "adrfam": "IPv4", 
00:33:10.145 "traddr": "10.0.0.2", 00:33:10.145 "trsvcid": "4420", 00:33:10.145 "trtype": "TCP" 00:33:10.145 }, 00:33:10.145 "peer_address": { 00:33:10.145 "adrfam": "IPv4", 00:33:10.145 "traddr": "10.0.0.1", 00:33:10.145 "trsvcid": "54832", 00:33:10.145 "trtype": "TCP" 00:33:10.145 }, 00:33:10.145 "qid": 0, 00:33:10.145 "state": "enabled" 00:33:10.145 } 00:33:10.145 ]' 00:33:10.146 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:10.146 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:10.146 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:10.146 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:10.146 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:10.146 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:10.146 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:10.146 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:10.402 00:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:33:11.387 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:11.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:11.387 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:11.387 00:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.387 00:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:11.387 00:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.387 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.387 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:11.387 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:11.387 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:11.644 00:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:11.905 00:33:11.905 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:11.905 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:11.905 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:12.165 { 00:33:12.165 "auth": { 00:33:12.165 "dhgroup": "ffdhe3072", 00:33:12.165 "digest": "sha384", 00:33:12.165 "state": "completed" 00:33:12.165 }, 00:33:12.165 "cntlid": 65, 00:33:12.165 "listen_address": { 00:33:12.165 "adrfam": "IPv4", 00:33:12.165 "traddr": "10.0.0.2", 00:33:12.165 "trsvcid": "4420", 00:33:12.165 "trtype": "TCP" 00:33:12.165 }, 00:33:12.165 "peer_address": { 00:33:12.165 "adrfam": "IPv4", 00:33:12.165 "traddr": "10.0.0.1", 00:33:12.165 "trsvcid": "54860", 00:33:12.165 "trtype": "TCP" 00:33:12.165 }, 00:33:12.165 "qid": 0, 00:33:12.165 "state": "enabled" 00:33:12.165 } 00:33:12.165 ]' 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:12.165 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:12.424 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:12.424 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:12.424 00:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:12.682 00:59:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:33:13.252 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:13.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:13.252 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:13.252 00:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.253 00:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:13.253 00:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.253 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:13.253 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:13.253 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:13.517 00:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:14.104 00:33:14.104 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:14.104 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:14.104 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:14.375 { 00:33:14.375 "auth": { 00:33:14.375 "dhgroup": "ffdhe3072", 00:33:14.375 "digest": "sha384", 00:33:14.375 "state": "completed" 00:33:14.375 }, 00:33:14.375 "cntlid": 67, 00:33:14.375 "listen_address": { 00:33:14.375 "adrfam": "IPv4", 00:33:14.375 "traddr": "10.0.0.2", 00:33:14.375 "trsvcid": "4420", 00:33:14.375 "trtype": "TCP" 00:33:14.375 }, 00:33:14.375 "peer_address": { 00:33:14.375 "adrfam": "IPv4", 00:33:14.375 "traddr": "10.0.0.1", 00:33:14.375 "trsvcid": "54882", 00:33:14.375 "trtype": "TCP" 00:33:14.375 }, 00:33:14.375 "qid": 0, 00:33:14.375 "state": "enabled" 00:33:14.375 } 00:33:14.375 ]' 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:14.375 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:14.645 00:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:33:15.598 00:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:15.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:15.598 00:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:15.598 00:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.598 00:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.598 00:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.598 00:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:15.598 00:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:15.598 00:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe3072 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:15.856 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:16.422 00:33:16.422 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:16.422 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:16.422 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:16.681 { 00:33:16.681 "auth": { 00:33:16.681 "dhgroup": "ffdhe3072", 00:33:16.681 "digest": "sha384", 00:33:16.681 "state": "completed" 00:33:16.681 }, 00:33:16.681 "cntlid": 69, 00:33:16.681 "listen_address": { 00:33:16.681 "adrfam": "IPv4", 00:33:16.681 "traddr": "10.0.0.2", 00:33:16.681 "trsvcid": "4420", 00:33:16.681 "trtype": "TCP" 00:33:16.681 }, 00:33:16.681 "peer_address": { 00:33:16.681 "adrfam": "IPv4", 00:33:16.681 "traddr": "10.0.0.1", 00:33:16.681 "trsvcid": "54902", 00:33:16.681 "trtype": "TCP" 00:33:16.681 }, 00:33:16.681 "qid": 0, 00:33:16.681 "state": "enabled" 00:33:16.681 } 00:33:16.681 ]' 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:16.681 
00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:16.681 00:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:16.939 00:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:33:17.890 00:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:17.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:17.890 00:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:17.890 00:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:17.890 00:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:17.890 00:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:17.890 00:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:17.890 00:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:17.890 00:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:18.148 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:18.406 00:33:18.406 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:18.406 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:18.406 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:18.665 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.665 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:18.665 00:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:18.665 00:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:18.665 00:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:18.665 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:18.665 { 00:33:18.665 "auth": { 00:33:18.665 "dhgroup": "ffdhe3072", 00:33:18.665 "digest": "sha384", 00:33:18.665 "state": "completed" 00:33:18.665 }, 00:33:18.665 "cntlid": 71, 00:33:18.665 "listen_address": { 00:33:18.665 "adrfam": "IPv4", 00:33:18.665 "traddr": "10.0.0.2", 00:33:18.665 "trsvcid": "4420", 00:33:18.665 "trtype": "TCP" 00:33:18.665 }, 00:33:18.665 "peer_address": { 00:33:18.665 "adrfam": "IPv4", 00:33:18.665 "traddr": "10.0.0.1", 00:33:18.665 "trsvcid": "42702", 00:33:18.665 "trtype": "TCP" 00:33:18.665 }, 00:33:18.665 "qid": 0, 00:33:18.666 "state": "enabled" 00:33:18.666 } 00:33:18.666 ]' 00:33:18.666 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:18.666 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:18.666 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:18.666 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:18.666 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:18.923 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:18.923 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:18.923 00:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:19.182 00:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:33:19.747 00:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:19.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:19.747 00:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:19.747 00:59:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:33:19.747 00:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:19.747 00:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:19.747 00:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:33:19.747 00:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:19.747 00:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:19.747 00:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:20.004 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:20.571 00:33:20.571 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:20.571 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:20.571 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:20.829 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.829 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:20.829 00:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:20.829 00:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:20.829 00:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:20.829 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:20.829 { 00:33:20.829 "auth": { 00:33:20.829 "dhgroup": "ffdhe4096", 00:33:20.829 "digest": "sha384", 00:33:20.829 "state": "completed" 
00:33:20.829 }, 00:33:20.829 "cntlid": 73, 00:33:20.829 "listen_address": { 00:33:20.829 "adrfam": "IPv4", 00:33:20.829 "traddr": "10.0.0.2", 00:33:20.829 "trsvcid": "4420", 00:33:20.829 "trtype": "TCP" 00:33:20.829 }, 00:33:20.829 "peer_address": { 00:33:20.829 "adrfam": "IPv4", 00:33:20.829 "traddr": "10.0.0.1", 00:33:20.829 "trsvcid": "42736", 00:33:20.829 "trtype": "TCP" 00:33:20.829 }, 00:33:20.829 "qid": 0, 00:33:20.829 "state": "enabled" 00:33:20.829 } 00:33:20.829 ]' 00:33:20.829 00:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:20.829 00:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:20.829 00:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:20.829 00:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:20.829 00:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:20.829 00:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:20.829 00:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:20.829 00:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:21.393 00:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:33:21.958 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:21.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:21.958 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:21.958 00:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:21.958 00:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:21.958 00:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:21.958 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:21.958 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:21.958 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:22.215 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:22.778 00:33:22.778 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:22.778 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:22.778 00:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:23.037 { 00:33:23.037 "auth": { 00:33:23.037 "dhgroup": "ffdhe4096", 00:33:23.037 "digest": "sha384", 00:33:23.037 "state": "completed" 00:33:23.037 }, 00:33:23.037 "cntlid": 75, 00:33:23.037 "listen_address": { 00:33:23.037 "adrfam": "IPv4", 00:33:23.037 "traddr": "10.0.0.2", 00:33:23.037 "trsvcid": "4420", 00:33:23.037 "trtype": "TCP" 00:33:23.037 }, 00:33:23.037 "peer_address": { 00:33:23.037 "adrfam": "IPv4", 00:33:23.037 "traddr": "10.0.0.1", 00:33:23.037 "trsvcid": "42774", 00:33:23.037 "trtype": "TCP" 00:33:23.037 }, 00:33:23.037 "qid": 0, 00:33:23.037 "state": "enabled" 00:33:23.037 } 00:33:23.037 ]' 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:23.037 00:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:23.295 00:59:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:33:24.291 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:24.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:24.291 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:24.291 00:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:24.291 00:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:24.291 00:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:24.291 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:24.291 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:24.291 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:24.549 00:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:24.807 00:33:24.807 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:24.807 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:24.807 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:25.064 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
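(A minimal sketch of the per-key loop body this trace is exercising, with the address, NQNs, key name and DHHC-1 secret copied from the log itself; routing the target-side calls through rpc.py's default socket is an assumption here, since the trace only shows the host socket /var/tmp/host.sock explicitly and reaches the target via its rpc_cmd wrapper.)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # SPDK RPC client used throughout the trace
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5
# Host side: pin the initiator bdev layer to one digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
# Target side: allow this host NQN and bind it to one DH-HMAC-CHAP key.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1
# Attach through the SPDK host stack, then read the negotiated auth parameters back from the target.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# Repeat the same authentication through the kernel initiator, then tear the host mapping down.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz:
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"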
00:33:25.064 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:25.064 00:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:25.064 00:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:25.064 00:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:25.064 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:25.064 { 00:33:25.064 "auth": { 00:33:25.064 "dhgroup": "ffdhe4096", 00:33:25.064 "digest": "sha384", 00:33:25.064 "state": "completed" 00:33:25.064 }, 00:33:25.064 "cntlid": 77, 00:33:25.064 "listen_address": { 00:33:25.064 "adrfam": "IPv4", 00:33:25.064 "traddr": "10.0.0.2", 00:33:25.064 "trsvcid": "4420", 00:33:25.064 "trtype": "TCP" 00:33:25.064 }, 00:33:25.064 "peer_address": { 00:33:25.064 "adrfam": "IPv4", 00:33:25.064 "traddr": "10.0.0.1", 00:33:25.064 "trsvcid": "42800", 00:33:25.064 "trtype": "TCP" 00:33:25.064 }, 00:33:25.064 "qid": 0, 00:33:25.064 "state": "enabled" 00:33:25.064 } 00:33:25.064 ]' 00:33:25.064 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:25.322 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:25.322 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:25.322 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:25.322 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:25.322 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:25.322 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:25.322 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:25.580 00:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:33:26.145 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:26.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:26.145 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:26.145 00:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:26.145 00:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:26.145 00:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:26.145 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:26.145 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:26.146 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:26.710 00:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:27.029 00:33:27.029 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:27.029 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:27.029 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:27.302 { 00:33:27.302 "auth": { 00:33:27.302 "dhgroup": "ffdhe4096", 00:33:27.302 "digest": "sha384", 00:33:27.302 "state": "completed" 00:33:27.302 }, 00:33:27.302 "cntlid": 79, 00:33:27.302 "listen_address": { 00:33:27.302 "adrfam": "IPv4", 00:33:27.302 "traddr": "10.0.0.2", 00:33:27.302 "trsvcid": "4420", 00:33:27.302 "trtype": "TCP" 00:33:27.302 }, 00:33:27.302 "peer_address": { 00:33:27.302 "adrfam": "IPv4", 00:33:27.302 "traddr": "10.0.0.1", 00:33:27.302 "trsvcid": "42820", 00:33:27.302 "trtype": "TCP" 00:33:27.302 }, 00:33:27.302 "qid": 0, 00:33:27.302 "state": "enabled" 00:33:27.302 } 00:33:27.302 ]' 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:27.302 
00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:27.302 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:27.869 00:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:33:28.436 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:28.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:28.436 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:28.436 00:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.436 00:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.436 00:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.436 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:33:28.436 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:28.436 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:28.436 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:28.694 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:33:28.694 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:28.694 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:28.694 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:28.694 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:28.694 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:33:28.694 00:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.694 00:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.694 00:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.694 00:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:28.694 00:59:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:29.262 00:33:29.262 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:29.262 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:29.262 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:29.521 { 00:33:29.521 "auth": { 00:33:29.521 "dhgroup": "ffdhe6144", 00:33:29.521 "digest": "sha384", 00:33:29.521 "state": "completed" 00:33:29.521 }, 00:33:29.521 "cntlid": 81, 00:33:29.521 "listen_address": { 00:33:29.521 "adrfam": "IPv4", 00:33:29.521 "traddr": "10.0.0.2", 00:33:29.521 "trsvcid": "4420", 00:33:29.521 "trtype": "TCP" 00:33:29.521 }, 00:33:29.521 "peer_address": { 00:33:29.521 "adrfam": "IPv4", 00:33:29.521 "traddr": "10.0.0.1", 00:33:29.521 "trsvcid": "35734", 00:33:29.521 "trtype": "TCP" 00:33:29.521 }, 00:33:29.521 "qid": 0, 00:33:29.521 "state": "enabled" 00:33:29.521 } 00:33:29.521 ]' 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:29.521 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:29.779 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:29.779 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:29.779 00:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:30.037 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:33:30.601 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:30.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:30.602 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:30.602 00:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.602 00:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:30.602 00:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.602 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:30.602 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:30.602 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:30.859 00:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:31.425 00:33:31.425 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:31.425 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:31.425 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:31.684 { 00:33:31.684 "auth": { 00:33:31.684 "dhgroup": "ffdhe6144", 00:33:31.684 "digest": "sha384", 00:33:31.684 "state": 
"completed" 00:33:31.684 }, 00:33:31.684 "cntlid": 83, 00:33:31.684 "listen_address": { 00:33:31.684 "adrfam": "IPv4", 00:33:31.684 "traddr": "10.0.0.2", 00:33:31.684 "trsvcid": "4420", 00:33:31.684 "trtype": "TCP" 00:33:31.684 }, 00:33:31.684 "peer_address": { 00:33:31.684 "adrfam": "IPv4", 00:33:31.684 "traddr": "10.0.0.1", 00:33:31.684 "trsvcid": "35758", 00:33:31.684 "trtype": "TCP" 00:33:31.684 }, 00:33:31.684 "qid": 0, 00:33:31.684 "state": "enabled" 00:33:31.684 } 00:33:31.684 ]' 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:31.684 00:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:31.941 00:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:33:32.875 00:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:32.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:32.875 00:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:32.875 00:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.875 00:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:32.875 00:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.875 00:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:32.875 00:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:32.875 00:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:32.875 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:33.441 00:33:33.441 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:33.441 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:33.441 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:33.699 { 00:33:33.699 "auth": { 00:33:33.699 "dhgroup": "ffdhe6144", 00:33:33.699 "digest": "sha384", 00:33:33.699 "state": "completed" 00:33:33.699 }, 00:33:33.699 "cntlid": 85, 00:33:33.699 "listen_address": { 00:33:33.699 "adrfam": "IPv4", 00:33:33.699 "traddr": "10.0.0.2", 00:33:33.699 "trsvcid": "4420", 00:33:33.699 "trtype": "TCP" 00:33:33.699 }, 00:33:33.699 "peer_address": { 00:33:33.699 "adrfam": "IPv4", 00:33:33.699 "traddr": "10.0.0.1", 00:33:33.699 "trsvcid": "35780", 00:33:33.699 "trtype": "TCP" 00:33:33.699 }, 00:33:33.699 "qid": 0, 00:33:33.699 "state": "enabled" 00:33:33.699 } 00:33:33.699 ]' 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:33.699 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:33.957 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:33.957 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:33.957 00:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:34.215 00:59:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:33:34.781 00:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:34.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:34.781 00:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:34.781 00:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:34.781 00:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:34.781 00:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:34.781 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:34.781 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:34.781 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:35.039 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:35.604 00:33:35.604 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:35.604 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:35.604 00:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:35.862 { 00:33:35.862 "auth": { 00:33:35.862 "dhgroup": "ffdhe6144", 00:33:35.862 "digest": "sha384", 00:33:35.862 "state": "completed" 00:33:35.862 }, 00:33:35.862 "cntlid": 87, 00:33:35.862 "listen_address": { 00:33:35.862 "adrfam": "IPv4", 00:33:35.862 "traddr": "10.0.0.2", 00:33:35.862 "trsvcid": "4420", 00:33:35.862 "trtype": "TCP" 00:33:35.862 }, 00:33:35.862 "peer_address": { 00:33:35.862 "adrfam": "IPv4", 00:33:35.862 "traddr": "10.0.0.1", 00:33:35.862 "trsvcid": "35810", 00:33:35.862 "trtype": "TCP" 00:33:35.862 }, 00:33:35.862 "qid": 0, 00:33:35.862 "state": "enabled" 00:33:35.862 } 00:33:35.862 ]' 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:35.862 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:36.120 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:36.120 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:36.120 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:36.377 00:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:33:36.942 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:36.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:36.942 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:36.942 00:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:36.942 00:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:36.942 00:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:36.942 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:33:36.942 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:36.942 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:36.942 00:59:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:37.201 00:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:38.133 00:33:38.133 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:38.133 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:38.133 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:38.133 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.133 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:38.133 00:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.133 00:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.133 00:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.133 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:38.133 { 00:33:38.133 "auth": { 00:33:38.133 "dhgroup": "ffdhe8192", 00:33:38.133 "digest": "sha384", 00:33:38.133 "state": "completed" 00:33:38.133 }, 00:33:38.133 "cntlid": 89, 00:33:38.133 "listen_address": { 00:33:38.133 "adrfam": "IPv4", 00:33:38.133 "traddr": "10.0.0.2", 00:33:38.133 "trsvcid": "4420", 00:33:38.133 "trtype": "TCP" 00:33:38.133 }, 00:33:38.133 "peer_address": { 00:33:38.133 "adrfam": "IPv4", 00:33:38.133 "traddr": "10.0.0.1", 00:33:38.134 "trsvcid": "38170", 00:33:38.134 "trtype": "TCP" 00:33:38.134 }, 00:33:38.134 "qid": 0, 00:33:38.134 "state": "enabled" 00:33:38.134 } 00:33:38.134 ]' 00:33:38.134 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:38.392 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:33:38.392 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:38.392 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:38.392 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:38.392 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:38.392 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:38.392 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:38.650 00:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:33:39.215 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:39.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:39.215 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:39.215 00:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.215 00:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.215 00:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.215 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:39.215 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:39.215 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:39.781 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:33:39.781 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:39.781 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:39.781 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:39.781 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:39.781 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:33:39.781 00:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.782 00:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.782 00:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.782 00:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:39.782 00:59:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:40.353 00:33:40.353 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:40.353 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:40.353 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:40.615 { 00:33:40.615 "auth": { 00:33:40.615 "dhgroup": "ffdhe8192", 00:33:40.615 "digest": "sha384", 00:33:40.615 "state": "completed" 00:33:40.615 }, 00:33:40.615 "cntlid": 91, 00:33:40.615 "listen_address": { 00:33:40.615 "adrfam": "IPv4", 00:33:40.615 "traddr": "10.0.0.2", 00:33:40.615 "trsvcid": "4420", 00:33:40.615 "trtype": "TCP" 00:33:40.615 }, 00:33:40.615 "peer_address": { 00:33:40.615 "adrfam": "IPv4", 00:33:40.615 "traddr": "10.0.0.1", 00:33:40.615 "trsvcid": "38200", 00:33:40.615 "trtype": "TCP" 00:33:40.615 }, 00:33:40.615 "qid": 0, 00:33:40.615 "state": "enabled" 00:33:40.615 } 00:33:40.615 ]' 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:40.615 00:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:40.873 00:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:33:41.807 00:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:41.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:41.807 00:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:41.807 00:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:41.807 00:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.807 00:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:41.807 00:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:41.807 00:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:41.807 00:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:41.807 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:33:41.807 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:41.807 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:41.807 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:41.807 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:41.807 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:33:41.807 00:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:41.807 00:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.807 00:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:41.808 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:41.808 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:42.748 00:33:42.748 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:42.748 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:42.748 00:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:42.748 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.748 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:42.748 00:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.749 00:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.749 00:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.749 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:42.749 { 00:33:42.749 "auth": { 00:33:42.749 "dhgroup": "ffdhe8192", 00:33:42.749 "digest": "sha384", 00:33:42.749 "state": 
"completed" 00:33:42.749 }, 00:33:42.749 "cntlid": 93, 00:33:42.749 "listen_address": { 00:33:42.749 "adrfam": "IPv4", 00:33:42.749 "traddr": "10.0.0.2", 00:33:42.749 "trsvcid": "4420", 00:33:42.749 "trtype": "TCP" 00:33:42.749 }, 00:33:42.749 "peer_address": { 00:33:42.749 "adrfam": "IPv4", 00:33:42.749 "traddr": "10.0.0.1", 00:33:42.749 "trsvcid": "38228", 00:33:42.749 "trtype": "TCP" 00:33:42.749 }, 00:33:42.749 "qid": 0, 00:33:42.749 "state": "enabled" 00:33:42.749 } 00:33:42.749 ]' 00:33:42.749 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:43.007 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:43.007 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:43.007 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:43.007 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:43.007 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:43.007 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:43.007 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:43.266 00:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:33:44.199 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:44.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:44.199 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:44.199 00:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:44.200 00:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.458 00:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:44.458 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:44.458 00:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:45.024 00:33:45.024 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:45.024 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:45.024 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:45.282 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.282 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:45.282 00:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:45.282 00:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.282 00:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:45.282 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:45.282 { 00:33:45.282 "auth": { 00:33:45.282 "dhgroup": "ffdhe8192", 00:33:45.282 "digest": "sha384", 00:33:45.282 "state": "completed" 00:33:45.282 }, 00:33:45.282 "cntlid": 95, 00:33:45.282 "listen_address": { 00:33:45.282 "adrfam": "IPv4", 00:33:45.282 "traddr": "10.0.0.2", 00:33:45.282 "trsvcid": "4420", 00:33:45.282 "trtype": "TCP" 00:33:45.282 }, 00:33:45.282 "peer_address": { 00:33:45.282 "adrfam": "IPv4", 00:33:45.282 "traddr": "10.0.0.1", 00:33:45.282 "trsvcid": "38252", 00:33:45.282 "trtype": "TCP" 00:33:45.282 }, 00:33:45.282 "qid": 0, 00:33:45.282 "state": "enabled" 00:33:45.282 } 00:33:45.282 ]' 00:33:45.282 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:45.283 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:45.283 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:45.541 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:45.541 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:45.541 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:45.541 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:45.541 00:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:45.799 00:59:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:33:46.366 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:46.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:46.366 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:46.366 00:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.366 00:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.366 00:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.366 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:33:46.367 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:33:46.367 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:46.367 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:46.367 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:46.625 00:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:47.192 00:33:47.192 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:47.192 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:33:47.192 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:47.192 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.192 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:47.192 00:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:47.192 00:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.450 00:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:47.450 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:47.450 { 00:33:47.450 "auth": { 00:33:47.450 "dhgroup": "null", 00:33:47.450 "digest": "sha512", 00:33:47.450 "state": "completed" 00:33:47.450 }, 00:33:47.450 "cntlid": 97, 00:33:47.450 "listen_address": { 00:33:47.450 "adrfam": "IPv4", 00:33:47.450 "traddr": "10.0.0.2", 00:33:47.450 "trsvcid": "4420", 00:33:47.450 "trtype": "TCP" 00:33:47.450 }, 00:33:47.450 "peer_address": { 00:33:47.450 "adrfam": "IPv4", 00:33:47.450 "traddr": "10.0.0.1", 00:33:47.450 "trsvcid": "38288", 00:33:47.451 "trtype": "TCP" 00:33:47.451 }, 00:33:47.451 "qid": 0, 00:33:47.451 "state": "enabled" 00:33:47.451 } 00:33:47.451 ]' 00:33:47.451 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:47.451 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:47.451 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:47.451 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:33:47.451 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:47.451 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:47.451 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:47.451 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:47.709 00:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:33:48.645 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:48.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:48.645 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:48.645 00:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.645 00:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.645 00:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:48.645 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:48.645 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:33:48.645 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:48.925 00:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:49.184 00:33:49.184 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:49.184 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:49.184 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:49.443 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.443 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:49.443 00:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.443 00:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:49.443 00:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.443 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:49.443 { 00:33:49.443 "auth": { 00:33:49.443 "dhgroup": "null", 00:33:49.443 "digest": "sha512", 00:33:49.443 "state": "completed" 00:33:49.443 }, 00:33:49.443 "cntlid": 99, 00:33:49.443 "listen_address": { 00:33:49.443 "adrfam": "IPv4", 00:33:49.443 "traddr": "10.0.0.2", 00:33:49.443 "trsvcid": "4420", 00:33:49.443 "trtype": "TCP" 00:33:49.443 }, 00:33:49.443 "peer_address": { 00:33:49.443 "adrfam": "IPv4", 00:33:49.443 "traddr": "10.0.0.1", 00:33:49.443 "trsvcid": "55984", 00:33:49.443 "trtype": "TCP" 00:33:49.443 }, 00:33:49.443 "qid": 0, 00:33:49.443 "state": "enabled" 00:33:49.443 } 00:33:49.443 ]' 00:33:49.443 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:49.443 00:59:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:49.443 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:49.701 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:33:49.701 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:49.701 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:49.701 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:49.701 00:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:49.960 00:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:33:50.894 00:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:50.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:50.894 00:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:50.894 00:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.894 00:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.894 00:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.894 00:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:50.894 00:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:50.894 00:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:51.151 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:33:51.151 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:51.151 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:51.151 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:51.151 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:51.151 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:33:51.151 00:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.151 00:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.151 00:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.151 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:51.151 00:59:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:51.411 00:33:51.411 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:51.411 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:51.411 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:51.670 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.670 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:51.670 00:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.670 00:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.670 00:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.670 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:51.670 { 00:33:51.670 "auth": { 00:33:51.670 "dhgroup": "null", 00:33:51.670 "digest": "sha512", 00:33:51.670 "state": "completed" 00:33:51.670 }, 00:33:51.670 "cntlid": 101, 00:33:51.670 "listen_address": { 00:33:51.670 "adrfam": "IPv4", 00:33:51.670 "traddr": "10.0.0.2", 00:33:51.670 "trsvcid": "4420", 00:33:51.670 "trtype": "TCP" 00:33:51.670 }, 00:33:51.670 "peer_address": { 00:33:51.670 "adrfam": "IPv4", 00:33:51.670 "traddr": "10.0.0.1", 00:33:51.670 "trsvcid": "56024", 00:33:51.670 "trtype": "TCP" 00:33:51.670 }, 00:33:51.670 "qid": 0, 00:33:51.670 "state": "enabled" 00:33:51.670 } 00:33:51.670 ]' 00:33:51.670 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:51.670 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:51.670 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:51.953 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:33:51.953 00:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:51.953 00:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:51.953 00:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:51.953 00:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:52.210 00:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:33:52.776 00:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:52.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:52.776 00:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:52.776 00:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.776 00:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:52.776 00:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.776 00:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:52.776 00:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:52.776 00:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:53.041 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:53.605 00:33:53.605 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:53.605 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:53.605 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:53.862 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.862 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:53.862 00:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.862 00:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.862 00:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.862 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:53.862 { 00:33:53.862 "auth": { 00:33:53.862 "dhgroup": "null", 00:33:53.862 "digest": "sha512", 00:33:53.862 "state": "completed" 00:33:53.862 }, 
00:33:53.862 "cntlid": 103, 00:33:53.862 "listen_address": { 00:33:53.862 "adrfam": "IPv4", 00:33:53.862 "traddr": "10.0.0.2", 00:33:53.862 "trsvcid": "4420", 00:33:53.862 "trtype": "TCP" 00:33:53.862 }, 00:33:53.862 "peer_address": { 00:33:53.862 "adrfam": "IPv4", 00:33:53.862 "traddr": "10.0.0.1", 00:33:53.862 "trsvcid": "56048", 00:33:53.862 "trtype": "TCP" 00:33:53.862 }, 00:33:53.862 "qid": 0, 00:33:53.862 "state": "enabled" 00:33:53.862 } 00:33:53.862 ]' 00:33:53.862 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:53.862 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:53.862 00:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:53.862 00:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:33:53.862 00:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:53.862 00:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:53.862 00:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:53.862 00:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:54.119 00:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:55.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:55.054 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:33:55.621 00:33:55.621 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:55.621 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:55.622 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:55.622 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.622 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:55.622 00:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.622 00:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.880 00:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.880 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:55.880 { 00:33:55.880 "auth": { 00:33:55.880 "dhgroup": "ffdhe2048", 00:33:55.880 "digest": "sha512", 00:33:55.880 "state": "completed" 00:33:55.880 }, 00:33:55.880 "cntlid": 105, 00:33:55.880 "listen_address": { 00:33:55.880 "adrfam": "IPv4", 00:33:55.880 "traddr": "10.0.0.2", 00:33:55.880 "trsvcid": "4420", 00:33:55.880 "trtype": "TCP" 00:33:55.880 }, 00:33:55.880 "peer_address": { 00:33:55.880 "adrfam": "IPv4", 00:33:55.880 "traddr": "10.0.0.1", 00:33:55.880 "trsvcid": "56074", 00:33:55.880 "trtype": "TCP" 00:33:55.880 }, 00:33:55.880 "qid": 0, 00:33:55.880 "state": "enabled" 00:33:55.880 } 00:33:55.880 ]' 00:33:55.880 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:55.880 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:55.880 00:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:55.880 00:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:55.880 00:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:55.880 00:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:55.880 00:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:55.880 00:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:56.137 00:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:33:57.071 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:57.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:57.071 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:57.071 01:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.071 01:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.071 01:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.071 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:57.071 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:57.071 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:57.329 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:33:57.329 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:57.329 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:57.329 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:57.329 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:57.329 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:33:57.329 01:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.330 01:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.330 01:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.330 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:57.330 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:33:57.588 00:33:57.588 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:57.588 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:57.588 01:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
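For reference, one pass of the connect_authenticate loop exercised above reduces to the short RPC sequence below. This is a minimal sketch, not the test script itself: it assumes the same /var/tmp/host.sock initiator RPC socket, the subsystem NQN nqn.2024-03.io.spdk:cnode0 and the UUID-based host NQN used throughout this run, writes scripts/rpc.py for the /home/vagrant/spdk_repo/spdk/scripts/rpc.py invocations shown in the log, and assumes the target-side rpc_cmd calls go to rpc.py's default socket.

# Initiator side: restrict which DH-CHAP digests/dhgroups the host will offer
# (digest, dhgroup and keyN vary per loop iteration; values match the current pass).
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side (rpc_cmd in the log): register the host NQN with one DH-CHAP key
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1

# Initiator side: attach with the matching key, verify the authenticated qpair, detach
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect sha512 / ffdhe2048 / completed
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0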
00:33:57.846 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.846 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:57.846 01:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.846 01:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.846 01:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.846 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:33:57.846 { 00:33:57.846 "auth": { 00:33:57.846 "dhgroup": "ffdhe2048", 00:33:57.846 "digest": "sha512", 00:33:57.846 "state": "completed" 00:33:57.846 }, 00:33:57.846 "cntlid": 107, 00:33:57.846 "listen_address": { 00:33:57.846 "adrfam": "IPv4", 00:33:57.846 "traddr": "10.0.0.2", 00:33:57.846 "trsvcid": "4420", 00:33:57.846 "trtype": "TCP" 00:33:57.846 }, 00:33:57.846 "peer_address": { 00:33:57.846 "adrfam": "IPv4", 00:33:57.846 "traddr": "10.0.0.1", 00:33:57.846 "trsvcid": "56244", 00:33:57.846 "trtype": "TCP" 00:33:57.846 }, 00:33:57.846 "qid": 0, 00:33:57.846 "state": "enabled" 00:33:57.846 } 00:33:57.846 ]' 00:33:57.846 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:33:58.104 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:58.104 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:33:58.104 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:58.104 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:33:58.104 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:58.104 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:58.104 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:58.362 01:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:59.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:59.291 01:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.549 01:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:59.549 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:59.549 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:33:59.807 00:33:59.807 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:33:59.807 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:33:59.807 01:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:00.065 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.066 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:00.066 01:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.066 01:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.066 01:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.066 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:00.066 { 00:34:00.066 "auth": { 00:34:00.066 "dhgroup": "ffdhe2048", 00:34:00.066 "digest": "sha512", 00:34:00.066 "state": "completed" 00:34:00.066 }, 00:34:00.066 "cntlid": 109, 00:34:00.066 "listen_address": { 00:34:00.066 "adrfam": "IPv4", 00:34:00.066 "traddr": "10.0.0.2", 00:34:00.066 "trsvcid": "4420", 00:34:00.066 "trtype": "TCP" 00:34:00.066 }, 00:34:00.066 "peer_address": { 00:34:00.066 "adrfam": "IPv4", 00:34:00.066 "traddr": "10.0.0.1", 00:34:00.066 "trsvcid": "56258", 00:34:00.066 "trtype": "TCP" 00:34:00.066 }, 00:34:00.066 "qid": 0, 00:34:00.066 "state": "enabled" 00:34:00.066 } 00:34:00.066 ]' 00:34:00.066 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:00.066 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:00.066 01:00:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:00.332 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:34:00.332 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:00.332 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:00.332 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:00.332 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:00.594 01:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:34:01.160 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:01.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:01.160 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:01.160 01:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.160 01:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:01.160 01:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.160 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:01.160 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:01.160 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:01.419 01:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:01.985 00:34:01.985 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:01.985 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:01.985 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:02.242 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.242 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:02.242 01:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.242 01:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:02.242 01:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.242 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:02.242 { 00:34:02.242 "auth": { 00:34:02.242 "dhgroup": "ffdhe2048", 00:34:02.242 "digest": "sha512", 00:34:02.242 "state": "completed" 00:34:02.242 }, 00:34:02.242 "cntlid": 111, 00:34:02.242 "listen_address": { 00:34:02.242 "adrfam": "IPv4", 00:34:02.242 "traddr": "10.0.0.2", 00:34:02.242 "trsvcid": "4420", 00:34:02.242 "trtype": "TCP" 00:34:02.242 }, 00:34:02.242 "peer_address": { 00:34:02.242 "adrfam": "IPv4", 00:34:02.242 "traddr": "10.0.0.1", 00:34:02.242 "trsvcid": "56284", 00:34:02.242 "trtype": "TCP" 00:34:02.242 }, 00:34:02.242 "qid": 0, 00:34:02.242 "state": "enabled" 00:34:02.243 } 00:34:02.243 ]' 00:34:02.243 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:02.243 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:02.243 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:02.243 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:34:02.243 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:02.243 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:02.243 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:02.243 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:02.500 01:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:03.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:03.452 01:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:04.017 00:34:04.017 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:04.017 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:04.017 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:04.276 { 
00:34:04.276 "auth": { 00:34:04.276 "dhgroup": "ffdhe3072", 00:34:04.276 "digest": "sha512", 00:34:04.276 "state": "completed" 00:34:04.276 }, 00:34:04.276 "cntlid": 113, 00:34:04.276 "listen_address": { 00:34:04.276 "adrfam": "IPv4", 00:34:04.276 "traddr": "10.0.0.2", 00:34:04.276 "trsvcid": "4420", 00:34:04.276 "trtype": "TCP" 00:34:04.276 }, 00:34:04.276 "peer_address": { 00:34:04.276 "adrfam": "IPv4", 00:34:04.276 "traddr": "10.0.0.1", 00:34:04.276 "trsvcid": "56314", 00:34:04.276 "trtype": "TCP" 00:34:04.276 }, 00:34:04.276 "qid": 0, 00:34:04.276 "state": "enabled" 00:34:04.276 } 00:34:04.276 ]' 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:04.276 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:04.533 01:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:34:05.468 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:05.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:05.468 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:05.468 01:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.468 01:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:05.468 01:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.468 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:05.468 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:05.468 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:05.726 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:34:05.726 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:05.726 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:05.726 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:34:05.726 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:34:05.726 01:00:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:34:05.726 01:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.726 01:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:05.726 01:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.726 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:34:05.726 01:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:34:05.985 00:34:05.985 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:05.985 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:05.985 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:06.243 { 00:34:06.243 "auth": { 00:34:06.243 "dhgroup": "ffdhe3072", 00:34:06.243 "digest": "sha512", 00:34:06.243 "state": "completed" 00:34:06.243 }, 00:34:06.243 "cntlid": 115, 00:34:06.243 "listen_address": { 00:34:06.243 "adrfam": "IPv4", 00:34:06.243 "traddr": "10.0.0.2", 00:34:06.243 "trsvcid": "4420", 00:34:06.243 "trtype": "TCP" 00:34:06.243 }, 00:34:06.243 "peer_address": { 00:34:06.243 "adrfam": "IPv4", 00:34:06.243 "traddr": "10.0.0.1", 00:34:06.243 "trsvcid": "56330", 00:34:06.243 "trtype": "TCP" 00:34:06.243 }, 00:34:06.243 "qid": 0, 00:34:06.243 "state": "enabled" 00:34:06.243 } 00:34:06.243 ]' 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:34:06.243 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:06.500 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:06.500 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:06.500 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:06.759 01:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:34:07.325 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:07.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:07.325 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:07.325 01:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.325 01:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.325 01:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.325 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:07.325 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:07.325 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:07.584 01:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:08.150 00:34:08.150 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:08.150 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:08.150 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:08.408 { 00:34:08.408 "auth": { 00:34:08.408 "dhgroup": "ffdhe3072", 00:34:08.408 "digest": "sha512", 00:34:08.408 "state": "completed" 00:34:08.408 }, 00:34:08.408 "cntlid": 117, 00:34:08.408 "listen_address": { 00:34:08.408 "adrfam": "IPv4", 00:34:08.408 "traddr": "10.0.0.2", 00:34:08.408 "trsvcid": "4420", 00:34:08.408 "trtype": "TCP" 00:34:08.408 }, 00:34:08.408 "peer_address": { 00:34:08.408 "adrfam": "IPv4", 00:34:08.408 "traddr": "10.0.0.1", 00:34:08.408 "trsvcid": "34826", 00:34:08.408 "trtype": "TCP" 00:34:08.408 }, 00:34:08.408 "qid": 0, 00:34:08.408 "state": "enabled" 00:34:08.408 } 00:34:08.408 ]' 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:34:08.408 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:08.665 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:08.665 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:08.665 01:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:08.923 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:34:09.489 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:09.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:09.489 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:09.489 01:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.489 01:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.489 01:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.489 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:09.489 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:09.489 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:09.747 01:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:10.007 00:34:10.312 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:10.312 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:10.312 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:10.592 { 00:34:10.592 "auth": { 00:34:10.592 "dhgroup": "ffdhe3072", 00:34:10.592 "digest": "sha512", 00:34:10.592 "state": "completed" 00:34:10.592 }, 00:34:10.592 "cntlid": 119, 00:34:10.592 "listen_address": { 00:34:10.592 "adrfam": "IPv4", 00:34:10.592 "traddr": "10.0.0.2", 00:34:10.592 "trsvcid": "4420", 00:34:10.592 "trtype": "TCP" 00:34:10.592 }, 00:34:10.592 "peer_address": { 00:34:10.592 "adrfam": "IPv4", 00:34:10.592 "traddr": "10.0.0.1", 00:34:10.592 "trsvcid": "34856", 00:34:10.592 "trtype": "TCP" 00:34:10.592 }, 00:34:10.592 "qid": 0, 00:34:10.592 "state": "enabled" 00:34:10.592 } 00:34:10.592 ]' 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
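For reference, the connect_authenticate step that this log keeps repeating reduces to the short shell sketch below. It is only a sketch of what the run appears to do, assuming the SPDK target is already listening on 10.0.0.2:4420, the host-side bdev service answers on /var/tmp/host.sock, the target uses the default RPC socket, and DH-HMAC-CHAP keys key0..key3 were registered earlier in the test; the NQNs and UUID are the ones from this run.

# Host side: restrict negotiation to one digest / DH-group pair (sha512 + ffdhe3072 here).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Target side (default RPC socket assumed): allow the host NQN on the subsystem and bind it to one of the pre-loaded keys.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3

# Host side: attach a controller; DH-HMAC-CHAP runs during the fabric CONNECT.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

# Host side again: detach once the qpair state has been checked, before the next key/dhgroup pass.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0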
00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:10.592 01:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:10.850 01:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:34:11.785 01:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:11.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:11.785 01:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:11.785 01:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.785 01:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:11.785 01:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.785 01:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.785 01:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:11.785 01:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:11.785 01:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:12.044 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:12.609 00:34:12.609 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:12.609 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:12.609 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:12.867 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.867 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:12.867 01:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.867 01:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.867 01:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.867 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:12.867 { 00:34:12.867 "auth": { 00:34:12.867 "dhgroup": "ffdhe4096", 00:34:12.867 "digest": "sha512", 00:34:12.867 "state": "completed" 00:34:12.867 }, 00:34:12.867 "cntlid": 121, 00:34:12.867 "listen_address": { 00:34:12.867 "adrfam": "IPv4", 00:34:12.867 "traddr": "10.0.0.2", 00:34:12.867 "trsvcid": "4420", 00:34:12.867 "trtype": "TCP" 00:34:12.867 }, 00:34:12.867 "peer_address": { 00:34:12.867 "adrfam": "IPv4", 00:34:12.867 "traddr": "10.0.0.1", 00:34:12.867 "trsvcid": "34876", 00:34:12.867 "trtype": "TCP" 00:34:12.867 }, 00:34:12.867 "qid": 0, 00:34:12.867 "state": "enabled" 00:34:12.867 } 00:34:12.867 ]' 00:34:12.867 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:12.867 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:12.867 01:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:12.867 01:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:34:12.867 01:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:12.867 01:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:12.867 01:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:12.867 01:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:13.125 01:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:34:14.060 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:14.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:14.060 01:00:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:14.061 01:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.061 01:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:14.061 01:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.061 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:14.061 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:14.061 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:34:14.320 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:34:14.579 00:34:14.579 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:14.579 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:14.579 01:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:14.838 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.838 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:14.838 01:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.838 01:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.096 01:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.096 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:15.096 { 
00:34:15.096 "auth": { 00:34:15.096 "dhgroup": "ffdhe4096", 00:34:15.096 "digest": "sha512", 00:34:15.096 "state": "completed" 00:34:15.096 }, 00:34:15.096 "cntlid": 123, 00:34:15.096 "listen_address": { 00:34:15.096 "adrfam": "IPv4", 00:34:15.096 "traddr": "10.0.0.2", 00:34:15.096 "trsvcid": "4420", 00:34:15.096 "trtype": "TCP" 00:34:15.096 }, 00:34:15.096 "peer_address": { 00:34:15.096 "adrfam": "IPv4", 00:34:15.096 "traddr": "10.0.0.1", 00:34:15.096 "trsvcid": "34890", 00:34:15.096 "trtype": "TCP" 00:34:15.096 }, 00:34:15.096 "qid": 0, 00:34:15.096 "state": "enabled" 00:34:15.096 } 00:34:15.096 ]' 00:34:15.096 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:15.096 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:15.096 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:15.096 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:34:15.096 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:15.096 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:15.096 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:15.096 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:15.354 01:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:16.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:16.290 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:16.857 00:34:16.857 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:16.857 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:16.857 01:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:17.116 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.116 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:17.116 01:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.116 01:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:17.116 01:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.116 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:17.116 { 00:34:17.116 "auth": { 00:34:17.116 "dhgroup": "ffdhe4096", 00:34:17.116 "digest": "sha512", 00:34:17.116 "state": "completed" 00:34:17.116 }, 00:34:17.116 "cntlid": 125, 00:34:17.116 "listen_address": { 00:34:17.116 "adrfam": "IPv4", 00:34:17.116 "traddr": "10.0.0.2", 00:34:17.116 "trsvcid": "4420", 00:34:17.116 "trtype": "TCP" 00:34:17.116 }, 00:34:17.116 "peer_address": { 00:34:17.116 "adrfam": "IPv4", 00:34:17.116 "traddr": "10.0.0.1", 00:34:17.116 "trsvcid": "34916", 00:34:17.116 "trtype": "TCP" 00:34:17.116 }, 00:34:17.116 "qid": 0, 00:34:17.116 "state": "enabled" 00:34:17.116 } 00:34:17.116 ]' 00:34:17.116 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:17.116 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:17.116 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:17.374 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:34:17.374 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:17.374 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:17.374 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:17.374 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:17.641 01:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:34:18.211 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:18.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:18.211 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:18.211 01:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.211 01:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:18.211 01:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.211 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:18.211 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:18.211 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:18.794 01:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:19.071 00:34:19.071 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:19.071 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:19.071 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
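The kernel-initiator leg that alternates with the RPC path above (nvme connect with an explicit DHHC-1 secret, then nvme disconnect) can be reproduced on its own roughly as follows. This is a sketch using the address, NQNs and host UUID from this run; the DHHC-1 secret is a placeholder and would normally be the secret matching the key configured on the target for this host.

# Connect with nvme-cli, supplying the DH-HMAC-CHAP secret directly (placeholder value).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 \
    --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 \
    --dhchap-secret 'DHHC-1:02:<base64 secret>:'

# Drop the association again before the next digest/dhgroup combination is tested.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0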
00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:19.340 { 00:34:19.340 "auth": { 00:34:19.340 "dhgroup": "ffdhe4096", 00:34:19.340 "digest": "sha512", 00:34:19.340 "state": "completed" 00:34:19.340 }, 00:34:19.340 "cntlid": 127, 00:34:19.340 "listen_address": { 00:34:19.340 "adrfam": "IPv4", 00:34:19.340 "traddr": "10.0.0.2", 00:34:19.340 "trsvcid": "4420", 00:34:19.340 "trtype": "TCP" 00:34:19.340 }, 00:34:19.340 "peer_address": { 00:34:19.340 "adrfam": "IPv4", 00:34:19.340 "traddr": "10.0.0.1", 00:34:19.340 "trsvcid": "35084", 00:34:19.340 "trtype": "TCP" 00:34:19.340 }, 00:34:19.340 "qid": 0, 00:34:19.340 "state": "enabled" 00:34:19.340 } 00:34:19.340 ]' 00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:34:19.340 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:19.604 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:19.604 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:19.605 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:19.870 01:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:34:20.435 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:20.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:20.435 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:20.435 01:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.435 01:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.435 01:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.435 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.435 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:20.436 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:34:20.436 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:20.694 01:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:21.262 00:34:21.262 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:21.262 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:21.262 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:21.536 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:21.537 { 00:34:21.537 "auth": { 00:34:21.537 "dhgroup": "ffdhe6144", 00:34:21.537 "digest": "sha512", 00:34:21.537 "state": "completed" 00:34:21.537 }, 00:34:21.537 "cntlid": 129, 00:34:21.537 "listen_address": { 00:34:21.537 "adrfam": "IPv4", 00:34:21.537 "traddr": "10.0.0.2", 00:34:21.537 "trsvcid": "4420", 00:34:21.537 "trtype": "TCP" 00:34:21.537 }, 00:34:21.537 "peer_address": { 00:34:21.537 "adrfam": "IPv4", 00:34:21.537 "traddr": "10.0.0.1", 00:34:21.537 "trsvcid": "35102", 00:34:21.537 "trtype": "TCP" 00:34:21.537 }, 00:34:21.537 "qid": 0, 00:34:21.537 "state": "enabled" 00:34:21.537 } 00:34:21.537 ]' 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
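The per-qpair verification being run here is just jq over the nvmf_subsystem_get_qpairs output; a minimal standalone version of that check (assuming the target answers on the default RPC socket and the expected values from this pass, sha512 / ffdhe6144) would be:

# Fetch the active qpairs for the subsystem and check the negotiated auth parameters.
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]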
00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:21.537 01:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:21.799 01:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:34:22.735 01:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:22.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:22.735 01:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:22.735 01:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.735 01:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:22.735 01:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.735 01:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:22.735 01:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:22.735 01:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:34:22.994 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:34:23.331 00:34:23.331 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:23.331 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:23.331 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:23.590 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.590 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:23.590 01:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.590 01:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:23.590 01:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.590 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:23.590 { 00:34:23.590 "auth": { 00:34:23.590 "dhgroup": "ffdhe6144", 00:34:23.590 "digest": "sha512", 00:34:23.590 "state": "completed" 00:34:23.590 }, 00:34:23.590 "cntlid": 131, 00:34:23.590 "listen_address": { 00:34:23.590 "adrfam": "IPv4", 00:34:23.590 "traddr": "10.0.0.2", 00:34:23.590 "trsvcid": "4420", 00:34:23.590 "trtype": "TCP" 00:34:23.590 }, 00:34:23.590 "peer_address": { 00:34:23.590 "adrfam": "IPv4", 00:34:23.590 "traddr": "10.0.0.1", 00:34:23.590 "trsvcid": "35124", 00:34:23.590 "trtype": "TCP" 00:34:23.590 }, 00:34:23.590 "qid": 0, 00:34:23.590 "state": "enabled" 00:34:23.590 } 00:34:23.590 ]' 00:34:23.590 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:23.590 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:23.590 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:23.849 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:23.849 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:23.849 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:23.849 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:23.849 01:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:24.109 01:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:34:24.677 01:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:24.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:24.677 01:00:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:24.677 01:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.677 01:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:24.677 01:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.677 01:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:24.677 01:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:24.677 01:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:24.936 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:25.502 00:34:25.502 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:25.502 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:25.502 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:25.760 { 00:34:25.760 "auth": { 
00:34:25.760 "dhgroup": "ffdhe6144", 00:34:25.760 "digest": "sha512", 00:34:25.760 "state": "completed" 00:34:25.760 }, 00:34:25.760 "cntlid": 133, 00:34:25.760 "listen_address": { 00:34:25.760 "adrfam": "IPv4", 00:34:25.760 "traddr": "10.0.0.2", 00:34:25.760 "trsvcid": "4420", 00:34:25.760 "trtype": "TCP" 00:34:25.760 }, 00:34:25.760 "peer_address": { 00:34:25.760 "adrfam": "IPv4", 00:34:25.760 "traddr": "10.0.0.1", 00:34:25.760 "trsvcid": "35150", 00:34:25.760 "trtype": "TCP" 00:34:25.760 }, 00:34:25.760 "qid": 0, 00:34:25.760 "state": "enabled" 00:34:25.760 } 00:34:25.760 ]' 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:25.760 01:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:25.760 01:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:25.760 01:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:25.760 01:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:26.325 01:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:34:26.890 01:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:26.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:26.890 01:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:26.890 01:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.890 01:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.890 01:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.890 01:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:26.890 01:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:26.890 01:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:27.147 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:27.405 00:34:27.663 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:27.663 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:27.663 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:27.921 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.921 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:27.921 01:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.921 01:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:27.921 01:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.921 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:27.922 { 00:34:27.922 "auth": { 00:34:27.922 "dhgroup": "ffdhe6144", 00:34:27.922 "digest": "sha512", 00:34:27.922 "state": "completed" 00:34:27.922 }, 00:34:27.922 "cntlid": 135, 00:34:27.922 "listen_address": { 00:34:27.922 "adrfam": "IPv4", 00:34:27.922 "traddr": "10.0.0.2", 00:34:27.922 "trsvcid": "4420", 00:34:27.922 "trtype": "TCP" 00:34:27.922 }, 00:34:27.922 "peer_address": { 00:34:27.922 "adrfam": "IPv4", 00:34:27.922 "traddr": "10.0.0.1", 00:34:27.922 "trsvcid": "52946", 00:34:27.922 "trtype": "TCP" 00:34:27.922 }, 00:34:27.922 "qid": 0, 00:34:27.922 "state": "enabled" 00:34:27.922 } 00:34:27.922 ]' 00:34:27.922 01:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:27.922 01:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:27.922 01:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:27.922 01:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:27.922 01:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:27.922 01:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:27.922 01:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:27.922 01:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:28.181 01:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:29.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:29.114 01:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:30.048 00:34:30.048 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:30.048 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:34:30.048 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:30.048 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.048 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:30.048 01:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.048 01:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:30.049 01:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.049 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:30.049 { 00:34:30.049 "auth": { 00:34:30.049 "dhgroup": "ffdhe8192", 00:34:30.049 "digest": "sha512", 00:34:30.049 "state": "completed" 00:34:30.049 }, 00:34:30.049 "cntlid": 137, 00:34:30.049 "listen_address": { 00:34:30.049 "adrfam": "IPv4", 00:34:30.049 "traddr": "10.0.0.2", 00:34:30.049 "trsvcid": "4420", 00:34:30.049 "trtype": "TCP" 00:34:30.049 }, 00:34:30.049 "peer_address": { 00:34:30.049 "adrfam": "IPv4", 00:34:30.049 "traddr": "10.0.0.1", 00:34:30.049 "trsvcid": "52968", 00:34:30.049 "trtype": "TCP" 00:34:30.049 }, 00:34:30.049 "qid": 0, 00:34:30.049 "state": "enabled" 00:34:30.049 } 00:34:30.049 ]' 00:34:30.321 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:30.321 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:30.321 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:30.321 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:30.321 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:30.321 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:30.322 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:30.322 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:30.587 01:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:34:31.524 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:31.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:31.525 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:31.525 01:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.525 01:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:31.525 01:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.525 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:31.525 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe8192 00:34:31.525 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:34:31.783 01:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:34:32.350 00:34:32.350 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:32.350 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:32.350 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:32.608 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.608 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:32.608 01:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:32.608 01:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:32.608 01:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:32.608 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:32.608 { 00:34:32.608 "auth": { 00:34:32.608 "dhgroup": "ffdhe8192", 00:34:32.608 "digest": "sha512", 00:34:32.608 "state": "completed" 00:34:32.608 }, 00:34:32.608 "cntlid": 139, 00:34:32.608 "listen_address": { 00:34:32.608 "adrfam": "IPv4", 00:34:32.608 "traddr": "10.0.0.2", 00:34:32.608 "trsvcid": "4420", 00:34:32.608 "trtype": "TCP" 00:34:32.608 }, 00:34:32.608 "peer_address": { 00:34:32.608 "adrfam": "IPv4", 00:34:32.608 "traddr": "10.0.0.1", 00:34:32.608 "trsvcid": "52996", 00:34:32.608 "trtype": "TCP" 00:34:32.608 }, 00:34:32.608 "qid": 0, 00:34:32.608 "state": "enabled" 00:34:32.608 } 00:34:32.608 ]' 00:34:32.608 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:34:32.608 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:32.608 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:32.866 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:32.866 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:32.866 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:32.866 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:32.866 01:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:33.124 01:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:01:OTRjODA1YWQ0NWJiZTE2N2MwNWJlNmI4ZTk2NGE3MmSJcWyz: 00:34:33.691 01:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:33.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:33.691 01:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:33.691 01:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.691 01:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:33.691 01:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.691 01:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:33.691 01:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:33.691 01:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:33.950 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:34:33.950 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:33.950 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:33.950 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:33.950 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:34:33.951 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key2 00:34:33.951 01:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.951 01:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:34.209 01:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:34.209 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:34.209 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:34.778 00:34:34.778 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:34.778 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:34.778 01:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:35.036 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.036 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:35.036 01:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.036 01:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.036 01:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.036 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:35.036 { 00:34:35.036 "auth": { 00:34:35.036 "dhgroup": "ffdhe8192", 00:34:35.036 "digest": "sha512", 00:34:35.036 "state": "completed" 00:34:35.036 }, 00:34:35.036 "cntlid": 141, 00:34:35.036 "listen_address": { 00:34:35.036 "adrfam": "IPv4", 00:34:35.036 "traddr": "10.0.0.2", 00:34:35.036 "trsvcid": "4420", 00:34:35.036 "trtype": "TCP" 00:34:35.036 }, 00:34:35.036 "peer_address": { 00:34:35.036 "adrfam": "IPv4", 00:34:35.036 "traddr": "10.0.0.1", 00:34:35.036 "trsvcid": "53024", 00:34:35.036 "trtype": "TCP" 00:34:35.036 }, 00:34:35.036 "qid": 0, 00:34:35.036 "state": "enabled" 00:34:35.036 } 00:34:35.036 ]' 00:34:35.036 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:35.311 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:35.311 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:35.311 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:35.312 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:35.312 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:35.312 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:35.312 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:35.570 01:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:02:ZWE5ZDBhNWQzYWYyNTRlMDI0Y2NjYjA2NzhjNWVjMjVlZTIwNWQzYWNjZDg2MDY1Ro8CZA==: 00:34:36.139 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:36.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:36.139 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:36.139 01:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:36.139 01:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:36.139 01:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:36.139 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:34:36.139 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:36.139 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key3 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:36.398 01:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:36.964 00:34:37.223 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:37.223 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:37.223 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:37.480 { 00:34:37.480 "auth": { 00:34:37.480 "dhgroup": "ffdhe8192", 
00:34:37.480 "digest": "sha512", 00:34:37.480 "state": "completed" 00:34:37.480 }, 00:34:37.480 "cntlid": 143, 00:34:37.480 "listen_address": { 00:34:37.480 "adrfam": "IPv4", 00:34:37.480 "traddr": "10.0.0.2", 00:34:37.480 "trsvcid": "4420", 00:34:37.480 "trtype": "TCP" 00:34:37.480 }, 00:34:37.480 "peer_address": { 00:34:37.480 "adrfam": "IPv4", 00:34:37.480 "traddr": "10.0.0.1", 00:34:37.480 "trsvcid": "53042", 00:34:37.480 "trtype": "TCP" 00:34:37.480 }, 00:34:37.480 "qid": 0, 00:34:37.480 "state": "enabled" 00:34:37.480 } 00:34:37.480 ]' 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:37.480 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:37.738 01:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:03:MGUzYzAzOWNiODI2NjBhOGMyMmZlYjM1YTBkYmI5ZDM5MDRkNGJhY2E5MTMwYzBiZTRjNjUzZWQ0NDUxZjUwOdc61us=: 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:38.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:38.674 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:38.932 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:34:38.932 
01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:34:38.932 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:38.932 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:38.932 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:34:38.932 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key0 00:34:38.932 01:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:38.932 01:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:38.932 01:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:38.932 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:38.932 01:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:39.499 00:34:39.499 01:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:34:39.499 01:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:34:39.499 01:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:39.756 01:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.756 01:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:39.756 01:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:39.756 01:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:39.756 01:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:39.756 01:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:34:39.756 { 00:34:39.756 "auth": { 00:34:39.756 "dhgroup": "ffdhe8192", 00:34:39.756 "digest": "sha512", 00:34:39.756 "state": "completed" 00:34:39.756 }, 00:34:39.756 "cntlid": 145, 00:34:39.756 "listen_address": { 00:34:39.756 "adrfam": "IPv4", 00:34:39.756 "traddr": "10.0.0.2", 00:34:39.756 "trsvcid": "4420", 00:34:39.756 "trtype": "TCP" 00:34:39.756 }, 00:34:39.756 "peer_address": { 00:34:39.756 "adrfam": "IPv4", 00:34:39.756 "traddr": "10.0.0.1", 00:34:39.756 "trsvcid": "43824", 00:34:39.756 "trtype": "TCP" 00:34:39.756 }, 00:34:39.756 "qid": 0, 00:34:39.756 "state": "enabled" 00:34:39.756 } 00:34:39.756 ]' 00:34:39.756 01:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:34:39.757 01:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:39.757 01:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:34:40.015 01:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:40.015 01:00:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:34:40.015 01:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:40.015 01:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:40.015 01:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:40.286 01:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid 805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-secret DHHC-1:00:Mjc0ODNkZDJlNDhlNjdkZTc5NzdlZjNmZTE4MGY0NTljY2YyYjhkNDEyZTlhNTk0ZtOJBw==: 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:40.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --dhchap-key key1 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:40.858 01:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:41.425 2024/05/15 01:00:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:34:41.425 request: 00:34:41.425 { 00:34:41.425 "method": "bdev_nvme_attach_controller", 00:34:41.425 "params": { 00:34:41.425 "name": "nvme0", 00:34:41.425 "trtype": "tcp", 00:34:41.425 "traddr": "10.0.0.2", 00:34:41.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5", 00:34:41.425 "adrfam": "ipv4", 00:34:41.425 "trsvcid": "4420", 00:34:41.425 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:41.425 "dhchap_key": "key2" 00:34:41.425 } 00:34:41.425 } 00:34:41.425 Got JSON-RPC error response 00:34:41.425 GoRPCClient: error on JSON-RPC call 00:34:41.425 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:34:41.425 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:41.425 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:41.425 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:41.425 01:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:41.425 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:41.425 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.425 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:41.425 01:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:34:41.425 01:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:34:41.426 01:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 93721 00:34:41.426 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 93721 ']' 00:34:41.426 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 93721 00:34:41.426 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:34:41.426 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:34:41.426 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93721 00:34:41.684 killing process with pid 93721 00:34:41.684 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:34:41.684 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:34:41.684 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93721' 00:34:41.684 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 93721 00:34:41.684 01:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 93721 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:34:41.964 01:00:45 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:41.964 rmmod nvme_tcp 00:34:41.964 rmmod nvme_fabrics 00:34:41.964 rmmod nvme_keyring 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 93677 ']' 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 93677 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 93677 ']' 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 93677 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93677 00:34:41.964 killing process with pid 93677 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93677' 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 93677 00:34:41.964 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 93677 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.xdg /tmp/spdk.key-sha256.SW4 /tmp/spdk.key-sha384.AZB /tmp/spdk.key-sha512.Gv5 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:34:42.223 00:34:42.223 real 2m45.197s 00:34:42.223 user 6m40.209s 00:34:42.223 sys 0m21.441s 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 
-- # xtrace_disable 00:34:42.223 01:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:42.223 ************************************ 00:34:42.223 END TEST nvmf_auth_target 00:34:42.223 ************************************ 00:34:42.482 01:00:45 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:34:42.482 01:00:45 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:34:42.482 01:00:45 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:34:42.482 01:00:45 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:42.482 01:00:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.482 ************************************ 00:34:42.482 START TEST nvmf_bdevio_no_huge 00:34:42.482 ************************************ 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:34:42.482 * Looking for test storage... 00:34:42.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:42.482 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:42.483 01:00:45 
nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 
-- # MALLOC_BDEV_SIZE=64 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:42.483 Cannot find device "nvmf_tgt_br" 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:42.483 Cannot find device "nvmf_tgt_br2" 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@156 -- # true 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:42.483 Cannot find device "nvmf_tgt_br" 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:42.483 Cannot find device "nvmf_tgt_br2" 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:42.483 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:42.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:42.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br 
type bridge 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:42.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:42.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:34:42.758 00:34:42.758 --- 10.0.0.2 ping statistics --- 00:34:42.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.758 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:42.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:42.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:34:42.758 00:34:42.758 --- 10.0.0.3 ping statistics --- 00:34:42.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.758 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:42.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:42.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:34:42.758 00:34:42.758 --- 10.0.0.1 ping statistics --- 00:34:42.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.758 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:42.758 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=98671 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 98671 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 98671 ']' 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:42.759 01:00:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:43.021 [2024-05-15 01:00:46.041965] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:34:43.021 [2024-05-15 01:00:46.042779] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:34:43.021 [2024-05-15 01:00:46.188998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:43.277 [2024-05-15 01:00:46.317589] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.278 [2024-05-15 01:00:46.317664] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.278 [2024-05-15 01:00:46.317677] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.278 [2024-05-15 01:00:46.317686] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.278 [2024-05-15 01:00:46.317693] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
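Condensed for reference, the namespace and no-hugepages target bring-up captured in the preceding records amounts to roughly the shell sequence below. This is a minimal sketch: device names, addresses, and paths are taken from the log itself, while the standalone ordering and the rpc_get_methods readiness loop are assumptions rather than a literal excerpt of nvmf/common.sh.

# Hypothetical standalone replay of the veth/namespace topology and the
# no-hugepages target start recorded above; not a literal excerpt of nvmf/common.sh.
set -e

SPDK=/home/vagrant/spdk_repo/spdk          # repo path as printed in the log
NS=nvmf_tgt_ns_spdk                        # target network namespace

# Veth topology: the initiator end stays in the root namespace, the target
# end is moved into $NS, and both bridge ends are enslaved to nvmf_br.
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                         # connectivity check, as in the log
modprobe nvme-tcp

# Start the target without hugepages; -s 1024 caps its memory at 1024 MB.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 1; done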
00:34:43.278 [2024-05-15 01:00:46.317807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:43.278 [2024-05-15 01:00:46.317889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:43.278 [2024-05-15 01:00:46.318038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:43.278 [2024-05-15 01:00:46.318464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:43.877 01:00:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:43.878 01:00:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:34:43.878 01:00:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:43.878 01:00:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:34:43.878 01:00:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:43.878 [2024-05-15 01:00:47.014961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:43.878 Malloc0 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:43.878 [2024-05-15 01:00:47.054929] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:34:43.878 [2024-05-15 01:00:47.055615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:43.878 { 00:34:43.878 "params": { 00:34:43.878 "name": "Nvme$subsystem", 00:34:43.878 "trtype": "$TEST_TRANSPORT", 00:34:43.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.878 "adrfam": "ipv4", 00:34:43.878 "trsvcid": "$NVMF_PORT", 00:34:43.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.878 "hdgst": ${hdgst:-false}, 00:34:43.878 "ddgst": ${ddgst:-false} 00:34:43.878 }, 00:34:43.878 "method": "bdev_nvme_attach_controller" 00:34:43.878 } 00:34:43.878 EOF 00:34:43.878 )") 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:34:43.878 01:00:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:43.878 "params": { 00:34:43.878 "name": "Nvme1", 00:34:43.878 "trtype": "tcp", 00:34:43.878 "traddr": "10.0.0.2", 00:34:43.878 "adrfam": "ipv4", 00:34:43.878 "trsvcid": "4420", 00:34:43.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.878 "hdgst": false, 00:34:43.878 "ddgst": false 00:34:43.878 }, 00:34:43.878 "method": "bdev_nvme_attach_controller" 00:34:43.878 }' 00:34:43.878 [2024-05-15 01:00:47.107224] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
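The provisioning RPCs and the bdevio launch recorded just above reduce to the sketch below. The rpc.py socket default and the CONFIG_JSON file path are assumptions; the harness itself streams the generated config over /dev/fd/62 rather than writing a file.

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"                 # rpc_cmd above talks to the target's default /var/tmp/spdk.sock

# Target-side provisioning mirrored from bdevio.sh@18..22 in the log: TCP transport,
# a 64 MiB / 512-byte-block malloc bdev, and one subsystem listening on 10.0.0.2:4420.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: bdevio runs with the same no-hugepages options and a JSON config whose
# single entry is the bdev_nvme_attach_controller call printed above. CONFIG_JSON is a
# stand-in path; the harness streams the generated config over /dev/fd/62 instead.
CONFIG_JSON=/tmp/nvme1_config.json
"$SPDK/test/bdev/bdevio/bdevio" --json "$CONFIG_JSON" --no-huge -s 1024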
00:34:43.878 [2024-05-15 01:00:47.107316] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid98725 ] 00:34:44.135 [2024-05-15 01:00:47.241335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:44.135 [2024-05-15 01:00:47.352981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.135 [2024-05-15 01:00:47.353118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:44.135 [2024-05-15 01:00:47.353125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.392 I/O targets: 00:34:44.392 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:44.392 00:34:44.392 00:34:44.392 CUnit - A unit testing framework for C - Version 2.1-3 00:34:44.392 http://cunit.sourceforge.net/ 00:34:44.392 00:34:44.392 00:34:44.392 Suite: bdevio tests on: Nvme1n1 00:34:44.392 Test: blockdev write read block ...passed 00:34:44.392 Test: blockdev write zeroes read block ...passed 00:34:44.392 Test: blockdev write zeroes read no split ...passed 00:34:44.392 Test: blockdev write zeroes read split ...passed 00:34:44.392 Test: blockdev write zeroes read split partial ...passed 00:34:44.392 Test: blockdev reset ...[2024-05-15 01:00:47.667483] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:44.392 [2024-05-15 01:00:47.667641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f86040 (9): Bad file descriptor 00:34:44.650 [2024-05-15 01:00:47.686654] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:44.650 passed 00:34:44.650 Test: blockdev write read 8 blocks ...passed 00:34:44.650 Test: blockdev write read size > 128k ...passed 00:34:44.650 Test: blockdev write read invalid size ...passed 00:34:44.650 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:44.650 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:44.650 Test: blockdev write read max offset ...passed 00:34:44.650 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:44.650 Test: blockdev writev readv 8 blocks ...passed 00:34:44.650 Test: blockdev writev readv 30 x 1block ...passed 00:34:44.650 Test: blockdev writev readv block ...passed 00:34:44.650 Test: blockdev writev readv size > 128k ...passed 00:34:44.650 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:44.650 Test: blockdev comparev and writev ...[2024-05-15 01:00:47.866080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.650 [2024-05-15 01:00:47.866271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:44.650 [2024-05-15 01:00:47.866467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.650 [2024-05-15 01:00:47.866594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:44.650 [2024-05-15 01:00:47.867085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.650 [2024-05-15 01:00:47.867243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:44.650 [2024-05-15 01:00:47.867414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.650 [2024-05-15 01:00:47.867532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:44.650 [2024-05-15 01:00:47.867883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.650 [2024-05-15 01:00:47.867910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:44.650 [2024-05-15 01:00:47.867930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.650 [2024-05-15 01:00:47.867941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:44.650 [2024-05-15 01:00:47.868239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.650 [2024-05-15 01:00:47.868255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:44.650 [2024-05-15 01:00:47.868271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.650 [2024-05-15 01:00:47.868283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:44.650 passed 00:34:44.907 Test: blockdev nvme passthru rw ...passed 00:34:44.907 Test: blockdev nvme passthru vendor specific ...[2024-05-15 01:00:47.951001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:44.907 [2024-05-15 01:00:47.951051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:44.907 [2024-05-15 01:00:47.951217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:44.907 [2024-05-15 01:00:47.951234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:44.907 passed 00:34:44.907 Test: blockdev nvme admin passthru ...[2024-05-15 01:00:47.951362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:44.907 [2024-05-15 01:00:47.951383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:44.907 [2024-05-15 01:00:47.951509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:44.907 [2024-05-15 01:00:47.951524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:44.907 passed 00:34:44.907 Test: blockdev copy ...passed 00:34:44.907 00:34:44.907 Run Summary: Type Total Ran Passed Failed Inactive 00:34:44.907 suites 1 1 n/a 0 0 00:34:44.907 tests 23 23 23 0 0 00:34:44.907 asserts 152 152 152 0 
n/a 00:34:44.907 00:34:44.907 Elapsed time = 0.943 seconds 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:45.164 rmmod nvme_tcp 00:34:45.164 rmmod nvme_fabrics 00:34:45.164 rmmod nvme_keyring 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 98671 ']' 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 98671 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 98671 ']' 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 98671 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:34:45.164 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 98671 00:34:45.423 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:34:45.423 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:34:45.423 killing process with pid 98671 00:34:45.423 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 98671' 00:34:45.423 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 98671 00:34:45.423 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 98671 00:34:45.423 [2024-05-15 01:00:48.459843] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:45.681 00:34:45.681 real 0m3.347s 00:34:45.681 user 0m11.812s 00:34:45.681 sys 0m1.330s 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:34:45.681 01:00:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:45.681 ************************************ 00:34:45.681 END TEST nvmf_bdevio_no_huge 00:34:45.681 ************************************ 00:34:45.681 01:00:48 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:34:45.681 01:00:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:34:45.681 01:00:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:45.681 01:00:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.681 ************************************ 00:34:45.681 START TEST nvmf_tls 00:34:45.681 ************************************ 00:34:45.681 01:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:34:45.958 * Looking for test storage... 00:34:45.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:45.958 01:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:45.958 01:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:45.958 01:00:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:45.959 Cannot find device "nvmf_tgt_br" 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:45.959 Cannot find device "nvmf_tgt_br2" 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link 
set nvmf_tgt_br down 00:34:45.959 Cannot find device "nvmf_tgt_br" 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:45.959 Cannot find device "nvmf_tgt_br2" 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:45.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:45.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:45.959 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i 
nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:46.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:46.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:34:46.218 00:34:46.218 --- 10.0.0.2 ping statistics --- 00:34:46.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.218 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:34:46.218 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:46.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:46.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:34:46.218 00:34:46.218 --- 10.0.0.3 ping statistics --- 00:34:46.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.219 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:46.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:46.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:34:46.219 00:34:46.219 --- 10.0.0.1 ping statistics --- 00:34:46.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.219 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=98910 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 98910 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 98910 ']' 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:34:46.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
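For reference, the nvmf_veth_init sequence traced above can be condensed into the following sketch; the namespace, interface, and address names are exactly the ones visible in the trace, not new configuration:

  # Virtual test network: a host-side veth bridged to a veth inside the
  # nvmf_tgt_ns_spdk namespace, with NVMe/TCP port 4420 allowed through iptables.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # reach the target address inside the namespace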
00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:46.219 01:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:46.219 [2024-05-15 01:00:49.475512] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:34:46.219 [2024-05-15 01:00:49.475640] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:46.477 [2024-05-15 01:00:49.615473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.477 [2024-05-15 01:00:49.721384] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:46.477 [2024-05-15 01:00:49.721465] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:46.477 [2024-05-15 01:00:49.721484] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:46.477 [2024-05-15 01:00:49.721499] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:46.477 [2024-05-15 01:00:49.721510] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:46.477 [2024-05-15 01:00:49.721544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.419 01:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:47.419 01:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:34:47.419 01:00:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:47.419 01:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:34:47.419 01:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:47.419 01:00:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.419 01:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:34:47.419 01:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:34:47.419 true 00:34:47.679 01:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:47.679 01:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:34:47.937 01:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:34:47.937 01:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:34:47.937 01:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:34:48.194 01:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:34:48.194 01:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:48.194 01:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:34:48.194 01:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:34:48.194 01:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:34:48.761 01:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:48.761 01:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # 
jq -r .tls_version 00:34:48.761 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:34:48.761 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:34:48.761 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:48.761 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:34:49.019 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:34:49.019 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:34:49.019 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:34:49.278 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:49.278 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:34:49.536 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:34:49.536 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:34:49.536 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:34:49.795 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:49.795 01:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:34:50.053 01:00:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:34:50.319 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:34:50.319 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:34:50.319 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.eTIIZlZ31S 00:34:50.319 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:34:50.319 
01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.w9ewW0btZe 00:34:50.319 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:50.319 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:34:50.319 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.eTIIZlZ31S 00:34:50.320 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.w9ewW0btZe 00:34:50.320 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:34:50.320 01:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:34:50.889 01:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.eTIIZlZ31S 00:34:50.889 01:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eTIIZlZ31S 00:34:50.889 01:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:51.148 [2024-05-15 01:00:54.239291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.148 01:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:34:51.406 01:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:34:51.664 [2024-05-15 01:00:54.763362] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:51.664 [2024-05-15 01:00:54.763471] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:51.664 [2024-05-15 01:00:54.763675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.664 01:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:34:51.922 malloc0 00:34:51.922 01:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:52.180 01:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTIIZlZ31S 00:34:52.439 [2024-05-15 01:00:55.602811] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:52.439 01:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.eTIIZlZ31S 00:35:04.661 Initializing NVMe Controllers 00:35:04.661 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:04.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:04.661 Initialization complete. Launching workers. 
00:35:04.661 ======================================================== 00:35:04.661 Latency(us) 00:35:04.661 Device Information : IOPS MiB/s Average min max 00:35:04.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9414.68 36.78 6799.54 1413.10 9709.85 00:35:04.661 ======================================================== 00:35:04.661 Total : 9414.68 36.78 6799.54 1413.10 9709.85 00:35:04.661 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eTIIZlZ31S 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eTIIZlZ31S' 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99269 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99269 /var/tmp/bdevperf.sock 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 99269 ']' 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:04.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:04.661 01:01:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:04.661 [2024-05-15 01:01:05.854399] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
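Before the perf and bdevperf runs above, the target side was configured through rpc.py; condensed, and using only the calls and arguments that appear in the trace (paths abbreviated), the TLS-enabled target setup amounts to this sketch:

  # SPDK target: ssl socket impl pinned to TLSv1.3, an NVMe/TCP listener with
  # TLS enabled (-k), and a host entry bound to the PSK interchange key file.
  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  chmod 0600 /tmp/tmp.eTIIZlZ31S    # file holding the NVMeTLSkey-1:01:... interchange key
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTIIZlZ31S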
00:35:04.662 [2024-05-15 01:01:05.855171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99269 ] 00:35:04.662 [2024-05-15 01:01:05.992935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.662 [2024-05-15 01:01:06.098392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:04.662 01:01:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:04.662 01:01:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:04.662 01:01:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTIIZlZ31S 00:35:04.662 [2024-05-15 01:01:06.474592] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:04.662 [2024-05-15 01:01:06.474723] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:04.662 TLSTESTn1 00:35:04.662 01:01:06 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:35:04.662 Running I/O for 10 seconds... 00:35:14.678 00:35:14.678 Latency(us) 00:35:14.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.678 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:14.678 Verification LBA range: start 0x0 length 0x2000 00:35:14.678 TLSTESTn1 : 10.02 3940.76 15.39 0.00 0.00 32416.15 6911.07 23712.12 00:35:14.678 =================================================================================================================== 00:35:14.678 Total : 3940.76 15.39 0.00 0.00 32416.15 6911.07 23712.12 00:35:14.678 0 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 99269 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 99269 ']' 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 99269 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99269 00:35:14.678 killing process with pid 99269 00:35:14.678 Received shutdown signal, test time was about 10.000000 seconds 00:35:14.678 00:35:14.678 Latency(us) 00:35:14.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.678 =================================================================================================================== 00:35:14.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99269' 00:35:14.678 
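On the initiator side, the same key file is handed to bdev_nvme_attach_controller over the bdevperf RPC socket; condensed from the run above (paths abbreviated), the TLS connection test looks like this sketch:

  # Start bdevperf in wait mode, attach a TLS NVMe/TCP controller with the
  # matching PSK, then drive the verify workload against it.
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTIIZlZ31S
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The failing attach attempts that follow show the same call being rejected at connect time when the key, hostnqn, or subnqn does not match what was registered on the target.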
01:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 99269 00:35:14.678 [2024-05-15 01:01:16.797829] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:14.678 01:01:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 99269 00:35:14.678 01:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w9ewW0btZe 00:35:14.678 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:14.678 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w9ewW0btZe 00:35:14.678 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:35:14.678 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:14.678 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:35:14.678 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:14.678 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w9ewW0btZe 00:35:14.678 01:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:14.678 01:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.w9ewW0btZe' 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:14.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99407 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99407 /var/tmp/bdevperf.sock 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 99407 ']' 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:14.679 01:01:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:14.679 [2024-05-15 01:01:17.068195] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:35:14.679 [2024-05-15 01:01:17.068482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99407 ] 00:35:14.679 [2024-05-15 01:01:17.204848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.679 [2024-05-15 01:01:17.302986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:14.943 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:14.943 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:14.943 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.w9ewW0btZe 00:35:15.203 [2024-05-15 01:01:18.302492] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:15.203 [2024-05-15 01:01:18.302666] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:15.203 [2024-05-15 01:01:18.307766] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:15.203 [2024-05-15 01:01:18.308362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7fdc0 (107): Transport endpoint is not connected 00:35:15.203 [2024-05-15 01:01:18.309346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7fdc0 (9): Bad file descriptor 00:35:15.203 [2024-05-15 01:01:18.310341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.203 [2024-05-15 01:01:18.310380] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:35:15.203 [2024-05-15 01:01:18.310400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:15.203 2024/05/15 01:01:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.w9ewW0btZe subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:35:15.203 request: 00:35:15.203 { 00:35:15.203 "method": "bdev_nvme_attach_controller", 00:35:15.203 "params": { 00:35:15.203 "name": "TLSTEST", 00:35:15.203 "trtype": "tcp", 00:35:15.203 "traddr": "10.0.0.2", 00:35:15.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:15.203 "adrfam": "ipv4", 00:35:15.203 "trsvcid": "4420", 00:35:15.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:15.203 "psk": "/tmp/tmp.w9ewW0btZe" 00:35:15.203 } 00:35:15.203 } 00:35:15.203 Got JSON-RPC error response 00:35:15.203 GoRPCClient: error on JSON-RPC call 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99407 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 99407 ']' 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 99407 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99407 00:35:15.203 killing process with pid 99407 00:35:15.203 Received shutdown signal, test time was about 10.000000 seconds 00:35:15.203 00:35:15.203 Latency(us) 00:35:15.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.203 =================================================================================================================== 00:35:15.203 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99407' 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 99407 00:35:15.203 [2024-05-15 01:01:18.359040] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:15.203 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 99407 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eTIIZlZ31S 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eTIIZlZ31S 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local 
arg=run_bdevperf 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eTIIZlZ31S 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eTIIZlZ31S' 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99449 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99449 /var/tmp/bdevperf.sock 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 99449 ']' 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:15.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:15.462 01:01:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:15.462 [2024-05-15 01:01:18.615879] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:35:15.462 [2024-05-15 01:01:18.615988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99449 ] 00:35:15.722 [2024-05-15 01:01:18.753086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.722 [2024-05-15 01:01:18.851345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.eTIIZlZ31S 00:35:16.660 [2024-05-15 01:01:19.853029] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:16.660 [2024-05-15 01:01:19.853143] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:16.660 [2024-05-15 01:01:19.858042] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:35:16.660 [2024-05-15 01:01:19.858077] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:35:16.660 [2024-05-15 01:01:19.858127] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:16.660 [2024-05-15 01:01:19.858754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca5dc0 (107): Transport endpoint is not connected 00:35:16.660 [2024-05-15 01:01:19.859741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca5dc0 (9): Bad file descriptor 00:35:16.660 [2024-05-15 01:01:19.860736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.660 [2024-05-15 01:01:19.860762] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:35:16.660 [2024-05-15 01:01:19.860773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:16.660 2024/05/15 01:01:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.eTIIZlZ31S subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:35:16.660 request: 00:35:16.660 { 00:35:16.660 "method": "bdev_nvme_attach_controller", 00:35:16.660 "params": { 00:35:16.660 "name": "TLSTEST", 00:35:16.660 "trtype": "tcp", 00:35:16.660 "traddr": "10.0.0.2", 00:35:16.660 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:16.660 "adrfam": "ipv4", 00:35:16.660 "trsvcid": "4420", 00:35:16.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:16.660 "psk": "/tmp/tmp.eTIIZlZ31S" 00:35:16.660 } 00:35:16.660 } 00:35:16.660 Got JSON-RPC error response 00:35:16.660 GoRPCClient: error on JSON-RPC call 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99449 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 99449 ']' 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 99449 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99449 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:35:16.660 killing process with pid 99449 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99449' 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 99449 00:35:16.660 Received shutdown signal, test time was about 10.000000 seconds 00:35:16.660 00:35:16.660 Latency(us) 00:35:16.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.660 =================================================================================================================== 00:35:16.660 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:16.660 [2024-05-15 01:01:19.901851] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:16.660 01:01:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 99449 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eTIIZlZ31S 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eTIIZlZ31S 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local 
arg=run_bdevperf 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eTIIZlZ31S 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eTIIZlZ31S' 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99495 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99495 /var/tmp/bdevperf.sock 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 99495 ']' 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:16.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:16.919 01:01:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:16.919 [2024-05-15 01:01:20.158949] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:35:16.919 [2024-05-15 01:01:20.159041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99495 ] 00:35:17.177 [2024-05-15 01:01:20.296631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.177 [2024-05-15 01:01:20.393522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTIIZlZ31S 00:35:18.154 [2024-05-15 01:01:21.351558] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:18.154 [2024-05-15 01:01:21.351731] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:18.154 [2024-05-15 01:01:21.356669] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:35:18.154 [2024-05-15 01:01:21.356709] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:35:18.154 [2024-05-15 01:01:21.356760] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:18.154 [2024-05-15 01:01:21.357359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120bdc0 (107): Transport endpoint is not connected 00:35:18.154 [2024-05-15 01:01:21.358346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120bdc0 (9): Bad file descriptor 00:35:18.154 [2024-05-15 01:01:21.359342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:35:18.154 [2024-05-15 01:01:21.359365] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:35:18.154 [2024-05-15 01:01:21.359376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:35:18.154 2024/05/15 01:01:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.eTIIZlZ31S subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:35:18.154 request: 00:35:18.154 { 00:35:18.154 "method": "bdev_nvme_attach_controller", 00:35:18.154 "params": { 00:35:18.154 "name": "TLSTEST", 00:35:18.154 "trtype": "tcp", 00:35:18.154 "traddr": "10.0.0.2", 00:35:18.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:18.154 "adrfam": "ipv4", 00:35:18.154 "trsvcid": "4420", 00:35:18.154 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:18.154 "psk": "/tmp/tmp.eTIIZlZ31S" 00:35:18.154 } 00:35:18.154 } 00:35:18.154 Got JSON-RPC error response 00:35:18.154 GoRPCClient: error on JSON-RPC call 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99495 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 99495 ']' 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 99495 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99495 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:35:18.154 killing process with pid 99495 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99495' 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 99495 00:35:18.154 Received shutdown signal, test time was about 10.000000 seconds 00:35:18.154 00:35:18.154 Latency(us) 00:35:18.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.154 =================================================================================================================== 00:35:18.154 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:18.154 [2024-05-15 01:01:21.410217] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:18.154 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 99495 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:35:18.413 01:01:21 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99535 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99535 /var/tmp/bdevperf.sock 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:18.413 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 99535 ']' 00:35:18.414 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:18.414 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:18.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:18.414 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:18.414 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:18.414 01:01:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:18.414 [2024-05-15 01:01:21.657803] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:35:18.414 [2024-05-15 01:01:21.657899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99535 ] 00:35:18.682 [2024-05-15 01:01:21.794124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.682 [2024-05-15 01:01:21.890826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:19.644 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:19.644 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:19.644 01:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:35:19.904 [2024-05-15 01:01:22.936156] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:19.904 [2024-05-15 01:01:22.937529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169aae0 (9): Bad file descriptor 00:35:19.904 [2024-05-15 01:01:22.938525] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.904 [2024-05-15 01:01:22.938546] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:35:19.904 [2024-05-15 01:01:22.938557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:19.904 2024/05/15 01:01:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:35:19.904 request: 00:35:19.904 { 00:35:19.904 "method": "bdev_nvme_attach_controller", 00:35:19.904 "params": { 00:35:19.904 "name": "TLSTEST", 00:35:19.904 "trtype": "tcp", 00:35:19.904 "traddr": "10.0.0.2", 00:35:19.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:19.904 "adrfam": "ipv4", 00:35:19.904 "trsvcid": "4420", 00:35:19.904 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:35:19.904 } 00:35:19.904 } 00:35:19.904 Got JSON-RPC error response 00:35:19.904 GoRPCClient: error on JSON-RPC call 00:35:19.904 01:01:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99535 00:35:19.904 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 99535 ']' 00:35:19.904 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 99535 00:35:19.904 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:19.905 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:19.905 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99535 00:35:19.905 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:35:19.905 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:35:19.905 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99535' 00:35:19.905 killing process with pid 99535 00:35:19.905 01:01:22 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # kill 99535 00:35:19.905 Received shutdown signal, test time was about 10.000000 seconds 00:35:19.905 00:35:19.905 Latency(us) 00:35:19.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.905 =================================================================================================================== 00:35:19.905 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:19.905 01:01:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 99535 00:35:19.905 01:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:19.905 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:19.905 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:19.905 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:19.905 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:19.905 01:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 98910 00:35:19.905 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 98910 ']' 00:35:19.905 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 98910 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 98910 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:35:20.164 killing process with pid 98910 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 98910' 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 98910 00:35:20.164 [2024-05-15 01:01:23.218640] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:20.164 [2024-05-15 01:01:23.218680] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 98910 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:35:20.164 01:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.nM8lqbv8dG 
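The key_long value generated just above follows the NVMe TLS PSK interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash field (02 here, from the digest argument 2), and a base64 blob, each field separated and terminated by a colon. The sketch below mirrors the python helper the trace invokes at nvmf/common.sh@705; the detail that the base64 payload is the key bytes with their CRC32 appended in little-endian order is an assumption made for illustration, not something the trace itself states.

key=00112233445566778899aabbccddeeff0011223344556677   # same key material as the trace
digest=2                                                # printed as the two-digit "02" field
python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib

key = sys.argv[1].encode()                   # key material used as raw bytes
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: CRC32 of the key, little-endian
blob = base64.b64encode(key + crc).decode()
print(f"NVMeTLSkey-1:{digest:02}:{blob}:")
PYEOF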
00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.nM8lqbv8dG 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99598 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99598 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 99598 ']' 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:20.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:20.422 01:01:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:20.422 [2024-05-15 01:01:23.552307] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:20.422 [2024-05-15 01:01:23.552415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:20.422 [2024-05-15 01:01:23.692527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.681 [2024-05-15 01:01:23.787626] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:20.681 [2024-05-15 01:01:23.787696] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:20.681 [2024-05-15 01:01:23.787708] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:20.681 [2024-05-15 01:01:23.787717] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:20.681 [2024-05-15 01:01:23.787724] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
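The setup_nvmf_tgt run traced below (target/tls.sh@49 through @58) provisions the target side for TLS in a handful of RPCs: a TCP transport, a subsystem, a TLS-enabled listener, a malloc namespace, and a host entry that binds nqn.2016-06.io.spdk:host1 to the PSK file. Condensed here into a stand-alone sketch using the same rpc.py calls that appear in the trace; the $rpc and $key shell variables are shorthand introduced for readability, and the target RPC socket is left at its default (/var/tmp/spdk.sock, the socket waitforlisten above waits on).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.nM8lqbv8dG                                               # interchange-format PSK, mode 0600

"$rpc" nvmf_create_transport -t tcp -o                                # TCP transport (flags as traced)
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -s SPDK00000000000001 -m 10                                    # serial number, up to 10 namespaces
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420 -k                                  # -k: TLS (secure channel) listener
"$rpc" bdev_malloc_create 32 4096 -b malloc0                          # 32 MiB malloc bdev, 4 KiB blocks
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # expose it as namespace 1
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
       nqn.2016-06.io.spdk:host1 --psk "$key"                         # allow host1, keyed by this PSK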
00:35:20.681 [2024-05-15 01:01:23.787752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:21.250 01:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:21.250 01:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:21.250 01:01:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:21.250 01:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:21.250 01:01:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:21.510 01:01:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:21.510 01:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.nM8lqbv8dG 00:35:21.510 01:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nM8lqbv8dG 00:35:21.510 01:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:21.772 [2024-05-15 01:01:24.817389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:21.772 01:01:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:22.031 01:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:22.290 [2024-05-15 01:01:25.357462] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:22.290 [2024-05-15 01:01:25.357567] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:22.290 [2024-05-15 01:01:25.357766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:22.290 01:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:22.550 malloc0 00:35:22.550 01:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:22.809 01:01:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nM8lqbv8dG 00:35:23.067 [2024-05-15 01:01:26.136689] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:23.067 01:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nM8lqbv8dG 00:35:23.067 01:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:23.067 01:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:23.067 01:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:23.067 01:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nM8lqbv8dG' 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99701 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99701 /var/tmp/bdevperf.sock 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 99701 ']' 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:23.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:23.068 01:01:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:23.068 [2024-05-15 01:01:26.203195] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:23.068 [2024-05-15 01:01:26.203293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99701 ] 00:35:23.068 [2024-05-15 01:01:26.336818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.326 [2024-05-15 01:01:26.434914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:24.260 01:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:24.260 01:01:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:24.260 01:01:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nM8lqbv8dG 00:35:24.260 [2024-05-15 01:01:27.442384] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:24.260 [2024-05-15 01:01:27.442529] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:24.260 TLSTESTn1 00:35:24.260 01:01:27 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:35:24.519 Running I/O for 10 seconds... 
00:35:34.492 00:35:34.492 Latency(us) 00:35:34.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.492 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:34.492 Verification LBA range: start 0x0 length 0x2000 00:35:34.492 TLSTESTn1 : 10.03 3799.19 14.84 0.00 0.00 33604.52 7119.59 41943.04 00:35:34.492 =================================================================================================================== 00:35:34.492 Total : 3799.19 14.84 0.00 0.00 33604.52 7119.59 41943.04 00:35:34.492 0 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 99701 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 99701 ']' 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 99701 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99701 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:35:34.492 killing process with pid 99701 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99701' 00:35:34.492 Received shutdown signal, test time was about 10.000000 seconds 00:35:34.492 00:35:34.492 Latency(us) 00:35:34.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.492 =================================================================================================================== 00:35:34.492 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 99701 00:35:34.492 [2024-05-15 01:01:37.705288] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:34.492 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 99701 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.nM8lqbv8dG 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nM8lqbv8dG 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nM8lqbv8dG 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nM8lqbv8dG 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:34.749 
01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nM8lqbv8dG' 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99850 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99850 /var/tmp/bdevperf.sock 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 99850 ']' 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:34.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:34.749 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:34.749 [2024-05-15 01:01:37.998413] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:34.749 [2024-05-15 01:01:37.999804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99850 ] 00:35:35.007 [2024-05-15 01:01:38.151892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.007 [2024-05-15 01:01:38.254198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.941 01:01:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:35.941 01:01:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:35.941 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nM8lqbv8dG 00:35:35.941 [2024-05-15 01:01:39.194871] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:35.941 [2024-05-15 01:01:39.194955] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:35:35.941 [2024-05-15 01:01:39.194966] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.nM8lqbv8dG 00:35:35.941 2024/05/15 01:01:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.nM8lqbv8dG subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:35:35.941 request: 00:35:35.941 { 00:35:35.941 "method": 
"bdev_nvme_attach_controller", 00:35:35.941 "params": { 00:35:35.941 "name": "TLSTEST", 00:35:35.941 "trtype": "tcp", 00:35:35.941 "traddr": "10.0.0.2", 00:35:35.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:35.941 "adrfam": "ipv4", 00:35:35.941 "trsvcid": "4420", 00:35:35.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:35.941 "psk": "/tmp/tmp.nM8lqbv8dG" 00:35:35.941 } 00:35:35.941 } 00:35:35.941 Got JSON-RPC error response 00:35:35.941 GoRPCClient: error on JSON-RPC call 00:35:35.941 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99850 00:35:35.941 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 99850 ']' 00:35:35.941 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 99850 00:35:35.941 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:35.941 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:35.941 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99850 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:35:36.199 killing process with pid 99850 00:35:36.199 Received shutdown signal, test time was about 10.000000 seconds 00:35:36.199 00:35:36.199 Latency(us) 00:35:36.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.199 =================================================================================================================== 00:35:36.199 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99850' 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 99850 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 99850 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 99598 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 99598 ']' 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 99598 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99598 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:35:36.199 killing process with pid 99598 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99598' 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 99598 00:35:36.199 [2024-05-15 01:01:39.457225] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:36.199 [2024-05-15 01:01:39.457272] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:36.199 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 99598 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99901 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99901 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 99901 ']' 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:36.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:36.456 01:01:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:36.456 [2024-05-15 01:01:39.730371] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:36.456 [2024-05-15 01:01:39.730484] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:36.713 [2024-05-15 01:01:39.868099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.713 [2024-05-15 01:01:39.964095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.713 [2024-05-15 01:01:39.964154] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.713 [2024-05-15 01:01:39.964166] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.713 [2024-05-15 01:01:39.964175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.713 [2024-05-15 01:01:39.964183] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
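The two chmod-0666 checks around this point exercise the same rule from both ends: with the key file world-readable, the initiator-side attach above fails at bdev_nvme.c:6105 ("Incorrect permissions for PSK file", then "Could not load PSK from /tmp/tmp.nM8lqbv8dG"), and the target-side nvmf_subsystem_add_host traced below fails at tcp.c:3575/3661 with the matching "Incorrect permissions for PSK file" / "Could not retrieve PSK from file". A hedged pre-flight guard in that spirit is sketched here; the trace only shows that 0666 is rejected and 0600 is accepted, so treating any mode other than 0600 as too loose is an assumption of this sketch.

# Hedged sketch: make sure the PSK file is not group/world accessible before
# handing it to bdev_nvme_attach_controller or nvmf_subsystem_add_host.
key=/tmp/tmp.nM8lqbv8dG
mode=$(stat -c '%a' "$key")
if [[ $mode != 600 ]]; then
    echo "PSK file $key has mode $mode; tightening to 0600" >&2
    chmod 0600 "$key"
fi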
00:35:36.713 [2024-05-15 01:01:39.964218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.649 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:37.649 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:37.649 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.nM8lqbv8dG 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.nM8lqbv8dG 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.nM8lqbv8dG 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nM8lqbv8dG 00:35:37.650 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:37.997 [2024-05-15 01:01:40.942018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.997 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:38.256 01:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:38.256 [2024-05-15 01:01:41.530135] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:38.256 [2024-05-15 01:01:41.530250] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:38.256 [2024-05-15 01:01:41.530446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:38.515 01:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:38.774 malloc0 00:35:38.774 01:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:39.034 01:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nM8lqbv8dG 00:35:39.034 [2024-05-15 01:01:42.313514] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:35:39.034 [2024-05-15 01:01:42.313561] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 
00:35:39.034 [2024-05-15 01:01:42.313594] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:35:39.034 2024/05/15 01:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.nM8lqbv8dG], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:35:39.034 request: 00:35:39.034 { 00:35:39.034 "method": "nvmf_subsystem_add_host", 00:35:39.034 "params": { 00:35:39.034 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.034 "host": "nqn.2016-06.io.spdk:host1", 00:35:39.034 "psk": "/tmp/tmp.nM8lqbv8dG" 00:35:39.034 } 00:35:39.034 } 00:35:39.034 Got JSON-RPC error response 00:35:39.034 GoRPCClient: error on JSON-RPC call 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 99901 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 99901 ']' 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 99901 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99901 00:35:39.292 killing process with pid 99901 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99901' 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 99901 00:35:39.292 [2024-05-15 01:01:42.358491] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 99901 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.nM8lqbv8dG 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:35:39.292 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:39.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:39.550 01:01:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100013 00:35:39.550 01:01:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100013 00:35:39.550 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 100013 ']' 00:35:39.550 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.550 01:01:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:39.550 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:39.550 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.550 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:39.550 01:01:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:39.550 [2024-05-15 01:01:42.632298] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:39.550 [2024-05-15 01:01:42.632387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.550 [2024-05-15 01:01:42.771191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.808 [2024-05-15 01:01:42.865703] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.808 [2024-05-15 01:01:42.865943] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.808 [2024-05-15 01:01:42.866022] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.808 [2024-05-15 01:01:42.866089] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.808 [2024-05-15 01:01:42.866169] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
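With the key back at 0600, the target is reprovisioned (the setup_nvmf_tgt trace below repeats the sequence sketched earlier) and the initiator side is then driven the same way as the earlier successful TLSTESTn1 run: bdevperf starts in -z mode and waits on its own RPC socket, the controller is attached with the PSK, and bdevperf.py triggers the verify workload. A condensed sketch of that initiator flow, reusing the binaries and arguments from the trace; the until loop is a plain stand-in for the harness's waitforlisten helper.

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

"$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

until [[ -S $sock ]]; do sleep 0.1; done     # wait for the bdevperf RPC socket to appear

"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nM8lqbv8dG

"$spdk/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$sock" perform_tests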
00:35:39.808 [2024-05-15 01:01:42.866256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.377 01:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:40.377 01:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:40.377 01:01:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:40.377 01:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:40.377 01:01:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:40.377 01:01:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:40.377 01:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.nM8lqbv8dG 00:35:40.377 01:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nM8lqbv8dG 00:35:40.377 01:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:40.635 [2024-05-15 01:01:43.855106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.635 01:01:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:40.912 01:01:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:41.172 [2024-05-15 01:01:44.359192] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:41.172 [2024-05-15 01:01:44.359298] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:41.172 [2024-05-15 01:01:44.359483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.172 01:01:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:41.431 malloc0 00:35:41.431 01:01:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:41.688 01:01:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nM8lqbv8dG 00:35:41.947 [2024-05-15 01:01:45.194904] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:41.947 01:01:45 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:41.947 01:01:45 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=100114 00:35:41.947 01:01:45 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:41.947 01:01:45 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 100114 /var/tmp/bdevperf.sock 00:35:41.947 01:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 100114 ']' 00:35:41.947 01:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:41.947 01:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:41.947 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:41.947 01:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:41.947 01:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:41.947 01:01:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:42.206 [2024-05-15 01:01:45.259012] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:42.206 [2024-05-15 01:01:45.259134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100114 ] 00:35:42.206 [2024-05-15 01:01:45.406427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.464 [2024-05-15 01:01:45.508281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:43.037 01:01:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:43.038 01:01:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:43.038 01:01:46 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nM8lqbv8dG 00:35:43.302 [2024-05-15 01:01:46.451280] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:43.302 [2024-05-15 01:01:46.451399] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:43.302 TLSTESTn1 00:35:43.302 01:01:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:43.871 01:01:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:35:43.871 "subsystems": [ 00:35:43.871 { 00:35:43.871 "subsystem": "keyring", 00:35:43.871 "config": [] 00:35:43.871 }, 00:35:43.871 { 00:35:43.871 "subsystem": "iobuf", 00:35:43.871 "config": [ 00:35:43.871 { 00:35:43.871 "method": "iobuf_set_options", 00:35:43.871 "params": { 00:35:43.871 "large_bufsize": 135168, 00:35:43.871 "large_pool_count": 1024, 00:35:43.871 "small_bufsize": 8192, 00:35:43.871 "small_pool_count": 8192 00:35:43.871 } 00:35:43.871 } 00:35:43.871 ] 00:35:43.871 }, 00:35:43.871 { 00:35:43.871 "subsystem": "sock", 00:35:43.871 "config": [ 00:35:43.871 { 00:35:43.871 "method": "sock_impl_set_options", 00:35:43.871 "params": { 00:35:43.871 "enable_ktls": false, 00:35:43.871 "enable_placement_id": 0, 00:35:43.871 "enable_quickack": false, 00:35:43.871 "enable_recv_pipe": true, 00:35:43.871 "enable_zerocopy_send_client": false, 00:35:43.871 "enable_zerocopy_send_server": true, 00:35:43.871 "impl_name": "posix", 00:35:43.871 "recv_buf_size": 2097152, 00:35:43.871 "send_buf_size": 2097152, 00:35:43.871 "tls_version": 0, 00:35:43.871 "zerocopy_threshold": 0 00:35:43.871 } 00:35:43.871 }, 00:35:43.871 { 00:35:43.871 "method": "sock_impl_set_options", 00:35:43.871 "params": { 00:35:43.871 "enable_ktls": false, 00:35:43.871 "enable_placement_id": 0, 00:35:43.871 "enable_quickack": false, 00:35:43.871 "enable_recv_pipe": true, 00:35:43.871 "enable_zerocopy_send_client": false, 00:35:43.871 "enable_zerocopy_send_server": true, 00:35:43.871 
"impl_name": "ssl", 00:35:43.871 "recv_buf_size": 4096, 00:35:43.871 "send_buf_size": 4096, 00:35:43.871 "tls_version": 0, 00:35:43.871 "zerocopy_threshold": 0 00:35:43.871 } 00:35:43.871 } 00:35:43.871 ] 00:35:43.871 }, 00:35:43.871 { 00:35:43.871 "subsystem": "vmd", 00:35:43.871 "config": [] 00:35:43.871 }, 00:35:43.871 { 00:35:43.871 "subsystem": "accel", 00:35:43.871 "config": [ 00:35:43.871 { 00:35:43.871 "method": "accel_set_options", 00:35:43.871 "params": { 00:35:43.871 "buf_count": 2048, 00:35:43.871 "large_cache_size": 16, 00:35:43.871 "sequence_count": 2048, 00:35:43.871 "small_cache_size": 128, 00:35:43.871 "task_count": 2048 00:35:43.871 } 00:35:43.871 } 00:35:43.871 ] 00:35:43.871 }, 00:35:43.871 { 00:35:43.871 "subsystem": "bdev", 00:35:43.871 "config": [ 00:35:43.871 { 00:35:43.871 "method": "bdev_set_options", 00:35:43.871 "params": { 00:35:43.871 "bdev_auto_examine": true, 00:35:43.871 "bdev_io_cache_size": 256, 00:35:43.871 "bdev_io_pool_size": 65535, 00:35:43.871 "iobuf_large_cache_size": 16, 00:35:43.871 "iobuf_small_cache_size": 128 00:35:43.871 } 00:35:43.871 }, 00:35:43.871 { 00:35:43.871 "method": "bdev_raid_set_options", 00:35:43.871 "params": { 00:35:43.871 "process_window_size_kb": 1024 00:35:43.871 } 00:35:43.871 }, 00:35:43.871 { 00:35:43.871 "method": "bdev_iscsi_set_options", 00:35:43.871 "params": { 00:35:43.871 "timeout_sec": 30 00:35:43.871 } 00:35:43.871 }, 00:35:43.871 { 00:35:43.871 "method": "bdev_nvme_set_options", 00:35:43.871 "params": { 00:35:43.871 "action_on_timeout": "none", 00:35:43.871 "allow_accel_sequence": false, 00:35:43.871 "arbitration_burst": 0, 00:35:43.871 "bdev_retry_count": 3, 00:35:43.871 "ctrlr_loss_timeout_sec": 0, 00:35:43.871 "delay_cmd_submit": true, 00:35:43.871 "dhchap_dhgroups": [ 00:35:43.871 "null", 00:35:43.871 "ffdhe2048", 00:35:43.871 "ffdhe3072", 00:35:43.871 "ffdhe4096", 00:35:43.871 "ffdhe6144", 00:35:43.871 "ffdhe8192" 00:35:43.871 ], 00:35:43.871 "dhchap_digests": [ 00:35:43.871 "sha256", 00:35:43.871 "sha384", 00:35:43.871 "sha512" 00:35:43.871 ], 00:35:43.871 "disable_auto_failback": false, 00:35:43.871 "fast_io_fail_timeout_sec": 0, 00:35:43.871 "generate_uuids": false, 00:35:43.871 "high_priority_weight": 0, 00:35:43.872 "io_path_stat": false, 00:35:43.872 "io_queue_requests": 0, 00:35:43.872 "keep_alive_timeout_ms": 10000, 00:35:43.872 "low_priority_weight": 0, 00:35:43.872 "medium_priority_weight": 0, 00:35:43.872 "nvme_adminq_poll_period_us": 10000, 00:35:43.872 "nvme_error_stat": false, 00:35:43.872 "nvme_ioq_poll_period_us": 0, 00:35:43.872 "rdma_cm_event_timeout_ms": 0, 00:35:43.872 "rdma_max_cq_size": 0, 00:35:43.872 "rdma_srq_size": 0, 00:35:43.872 "reconnect_delay_sec": 0, 00:35:43.872 "timeout_admin_us": 0, 00:35:43.872 "timeout_us": 0, 00:35:43.872 "transport_ack_timeout": 0, 00:35:43.872 "transport_retry_count": 4, 00:35:43.872 "transport_tos": 0 00:35:43.872 } 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "method": "bdev_nvme_set_hotplug", 00:35:43.872 "params": { 00:35:43.872 "enable": false, 00:35:43.872 "period_us": 100000 00:35:43.872 } 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "method": "bdev_malloc_create", 00:35:43.872 "params": { 00:35:43.872 "block_size": 4096, 00:35:43.872 "name": "malloc0", 00:35:43.872 "num_blocks": 8192, 00:35:43.872 "optimal_io_boundary": 0, 00:35:43.872 "physical_block_size": 4096, 00:35:43.872 "uuid": "a0aad1af-868e-4088-b13f-7cce10ff8952" 00:35:43.872 } 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "method": "bdev_wait_for_examine" 00:35:43.872 } 00:35:43.872 ] 
00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "subsystem": "nbd", 00:35:43.872 "config": [] 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "subsystem": "scheduler", 00:35:43.872 "config": [ 00:35:43.872 { 00:35:43.872 "method": "framework_set_scheduler", 00:35:43.872 "params": { 00:35:43.872 "name": "static" 00:35:43.872 } 00:35:43.872 } 00:35:43.872 ] 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "subsystem": "nvmf", 00:35:43.872 "config": [ 00:35:43.872 { 00:35:43.872 "method": "nvmf_set_config", 00:35:43.872 "params": { 00:35:43.872 "admin_cmd_passthru": { 00:35:43.872 "identify_ctrlr": false 00:35:43.872 }, 00:35:43.872 "discovery_filter": "match_any" 00:35:43.872 } 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "method": "nvmf_set_max_subsystems", 00:35:43.872 "params": { 00:35:43.872 "max_subsystems": 1024 00:35:43.872 } 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "method": "nvmf_set_crdt", 00:35:43.872 "params": { 00:35:43.872 "crdt1": 0, 00:35:43.872 "crdt2": 0, 00:35:43.872 "crdt3": 0 00:35:43.872 } 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "method": "nvmf_create_transport", 00:35:43.872 "params": { 00:35:43.872 "abort_timeout_sec": 1, 00:35:43.872 "ack_timeout": 0, 00:35:43.872 "buf_cache_size": 4294967295, 00:35:43.872 "c2h_success": false, 00:35:43.872 "data_wr_pool_size": 0, 00:35:43.872 "dif_insert_or_strip": false, 00:35:43.872 "in_capsule_data_size": 4096, 00:35:43.872 "io_unit_size": 131072, 00:35:43.872 "max_aq_depth": 128, 00:35:43.872 "max_io_qpairs_per_ctrlr": 127, 00:35:43.872 "max_io_size": 131072, 00:35:43.872 "max_queue_depth": 128, 00:35:43.872 "num_shared_buffers": 511, 00:35:43.872 "sock_priority": 0, 00:35:43.872 "trtype": "TCP", 00:35:43.872 "zcopy": false 00:35:43.872 } 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "method": "nvmf_create_subsystem", 00:35:43.872 "params": { 00:35:43.872 "allow_any_host": false, 00:35:43.872 "ana_reporting": false, 00:35:43.872 "max_cntlid": 65519, 00:35:43.872 "max_namespaces": 10, 00:35:43.872 "min_cntlid": 1, 00:35:43.872 "model_number": "SPDK bdev Controller", 00:35:43.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:43.872 "serial_number": "SPDK00000000000001" 00:35:43.872 } 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "method": "nvmf_subsystem_add_host", 00:35:43.872 "params": { 00:35:43.872 "host": "nqn.2016-06.io.spdk:host1", 00:35:43.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:43.872 "psk": "/tmp/tmp.nM8lqbv8dG" 00:35:43.872 } 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "method": "nvmf_subsystem_add_ns", 00:35:43.872 "params": { 00:35:43.872 "namespace": { 00:35:43.872 "bdev_name": "malloc0", 00:35:43.872 "nguid": "A0AAD1AF868E4088B13F7CCE10FF8952", 00:35:43.872 "no_auto_visible": false, 00:35:43.872 "nsid": 1, 00:35:43.872 "uuid": "a0aad1af-868e-4088-b13f-7cce10ff8952" 00:35:43.872 }, 00:35:43.872 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:35:43.872 } 00:35:43.872 }, 00:35:43.872 { 00:35:43.872 "method": "nvmf_subsystem_add_listener", 00:35:43.872 "params": { 00:35:43.872 "listen_address": { 00:35:43.872 "adrfam": "IPv4", 00:35:43.872 "traddr": "10.0.0.2", 00:35:43.872 "trsvcid": "4420", 00:35:43.872 "trtype": "TCP" 00:35:43.872 }, 00:35:43.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:43.872 "secure_channel": true 00:35:43.872 } 00:35:43.872 } 00:35:43.872 ] 00:35:43.872 } 00:35:43.872 ] 00:35:43.872 }' 00:35:43.872 01:01:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:35:44.131 01:01:47 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 
00:35:44.131 "subsystems": [ 00:35:44.131 { 00:35:44.131 "subsystem": "keyring", 00:35:44.131 "config": [] 00:35:44.131 }, 00:35:44.131 { 00:35:44.131 "subsystem": "iobuf", 00:35:44.131 "config": [ 00:35:44.131 { 00:35:44.131 "method": "iobuf_set_options", 00:35:44.131 "params": { 00:35:44.131 "large_bufsize": 135168, 00:35:44.131 "large_pool_count": 1024, 00:35:44.131 "small_bufsize": 8192, 00:35:44.131 "small_pool_count": 8192 00:35:44.131 } 00:35:44.131 } 00:35:44.131 ] 00:35:44.131 }, 00:35:44.131 { 00:35:44.131 "subsystem": "sock", 00:35:44.131 "config": [ 00:35:44.131 { 00:35:44.131 "method": "sock_impl_set_options", 00:35:44.131 "params": { 00:35:44.132 "enable_ktls": false, 00:35:44.132 "enable_placement_id": 0, 00:35:44.132 "enable_quickack": false, 00:35:44.132 "enable_recv_pipe": true, 00:35:44.132 "enable_zerocopy_send_client": false, 00:35:44.132 "enable_zerocopy_send_server": true, 00:35:44.132 "impl_name": "posix", 00:35:44.132 "recv_buf_size": 2097152, 00:35:44.132 "send_buf_size": 2097152, 00:35:44.132 "tls_version": 0, 00:35:44.132 "zerocopy_threshold": 0 00:35:44.132 } 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "method": "sock_impl_set_options", 00:35:44.132 "params": { 00:35:44.132 "enable_ktls": false, 00:35:44.132 "enable_placement_id": 0, 00:35:44.132 "enable_quickack": false, 00:35:44.132 "enable_recv_pipe": true, 00:35:44.132 "enable_zerocopy_send_client": false, 00:35:44.132 "enable_zerocopy_send_server": true, 00:35:44.132 "impl_name": "ssl", 00:35:44.132 "recv_buf_size": 4096, 00:35:44.132 "send_buf_size": 4096, 00:35:44.132 "tls_version": 0, 00:35:44.132 "zerocopy_threshold": 0 00:35:44.132 } 00:35:44.132 } 00:35:44.132 ] 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "subsystem": "vmd", 00:35:44.132 "config": [] 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "subsystem": "accel", 00:35:44.132 "config": [ 00:35:44.132 { 00:35:44.132 "method": "accel_set_options", 00:35:44.132 "params": { 00:35:44.132 "buf_count": 2048, 00:35:44.132 "large_cache_size": 16, 00:35:44.132 "sequence_count": 2048, 00:35:44.132 "small_cache_size": 128, 00:35:44.132 "task_count": 2048 00:35:44.132 } 00:35:44.132 } 00:35:44.132 ] 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "subsystem": "bdev", 00:35:44.132 "config": [ 00:35:44.132 { 00:35:44.132 "method": "bdev_set_options", 00:35:44.132 "params": { 00:35:44.132 "bdev_auto_examine": true, 00:35:44.132 "bdev_io_cache_size": 256, 00:35:44.132 "bdev_io_pool_size": 65535, 00:35:44.132 "iobuf_large_cache_size": 16, 00:35:44.132 "iobuf_small_cache_size": 128 00:35:44.132 } 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "method": "bdev_raid_set_options", 00:35:44.132 "params": { 00:35:44.132 "process_window_size_kb": 1024 00:35:44.132 } 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "method": "bdev_iscsi_set_options", 00:35:44.132 "params": { 00:35:44.132 "timeout_sec": 30 00:35:44.132 } 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "method": "bdev_nvme_set_options", 00:35:44.132 "params": { 00:35:44.132 "action_on_timeout": "none", 00:35:44.132 "allow_accel_sequence": false, 00:35:44.132 "arbitration_burst": 0, 00:35:44.132 "bdev_retry_count": 3, 00:35:44.132 "ctrlr_loss_timeout_sec": 0, 00:35:44.132 "delay_cmd_submit": true, 00:35:44.132 "dhchap_dhgroups": [ 00:35:44.132 "null", 00:35:44.132 "ffdhe2048", 00:35:44.132 "ffdhe3072", 00:35:44.132 "ffdhe4096", 00:35:44.132 "ffdhe6144", 00:35:44.132 "ffdhe8192" 00:35:44.132 ], 00:35:44.132 "dhchap_digests": [ 00:35:44.132 "sha256", 00:35:44.132 "sha384", 00:35:44.132 "sha512" 00:35:44.132 ], 00:35:44.132 
"disable_auto_failback": false, 00:35:44.132 "fast_io_fail_timeout_sec": 0, 00:35:44.132 "generate_uuids": false, 00:35:44.132 "high_priority_weight": 0, 00:35:44.132 "io_path_stat": false, 00:35:44.132 "io_queue_requests": 512, 00:35:44.132 "keep_alive_timeout_ms": 10000, 00:35:44.132 "low_priority_weight": 0, 00:35:44.132 "medium_priority_weight": 0, 00:35:44.132 "nvme_adminq_poll_period_us": 10000, 00:35:44.132 "nvme_error_stat": false, 00:35:44.132 "nvme_ioq_poll_period_us": 0, 00:35:44.132 "rdma_cm_event_timeout_ms": 0, 00:35:44.132 "rdma_max_cq_size": 0, 00:35:44.132 "rdma_srq_size": 0, 00:35:44.132 "reconnect_delay_sec": 0, 00:35:44.132 "timeout_admin_us": 0, 00:35:44.132 "timeout_us": 0, 00:35:44.132 "transport_ack_timeout": 0, 00:35:44.132 "transport_retry_count": 4, 00:35:44.132 "transport_tos": 0 00:35:44.132 } 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "method": "bdev_nvme_attach_controller", 00:35:44.132 "params": { 00:35:44.132 "adrfam": "IPv4", 00:35:44.132 "ctrlr_loss_timeout_sec": 0, 00:35:44.132 "ddgst": false, 00:35:44.132 "fast_io_fail_timeout_sec": 0, 00:35:44.132 "hdgst": false, 00:35:44.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:44.132 "name": "TLSTEST", 00:35:44.132 "prchk_guard": false, 00:35:44.132 "prchk_reftag": false, 00:35:44.132 "psk": "/tmp/tmp.nM8lqbv8dG", 00:35:44.132 "reconnect_delay_sec": 0, 00:35:44.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:44.132 "traddr": "10.0.0.2", 00:35:44.132 "trsvcid": "4420", 00:35:44.132 "trtype": "TCP" 00:35:44.132 } 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "method": "bdev_nvme_set_hotplug", 00:35:44.132 "params": { 00:35:44.132 "enable": false, 00:35:44.132 "period_us": 100000 00:35:44.132 } 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "method": "bdev_wait_for_examine" 00:35:44.132 } 00:35:44.132 ] 00:35:44.132 }, 00:35:44.132 { 00:35:44.132 "subsystem": "nbd", 00:35:44.132 "config": [] 00:35:44.132 } 00:35:44.132 ] 00:35:44.132 }' 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 100114 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 100114 ']' 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 100114 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100114 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:35:44.132 killing process with pid 100114 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100114' 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 100114 00:35:44.132 Received shutdown signal, test time was about 10.000000 seconds 00:35:44.132 00:35:44.132 Latency(us) 00:35:44.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.132 =================================================================================================================== 00:35:44.132 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:44.132 [2024-05-15 01:01:47.254212] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 
times 00:35:44.132 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 100114 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 100013 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 100013 ']' 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 100013 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100013 00:35:44.419 killing process with pid 100013 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100013' 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 100013 00:35:44.419 [2024-05-15 01:01:47.492847] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:44.419 [2024-05-15 01:01:47.492897] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:44.419 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 100013 00:35:44.730 01:01:47 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:35:44.730 01:01:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:44.730 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:35:44.730 01:01:47 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:35:44.730 "subsystems": [ 00:35:44.730 { 00:35:44.730 "subsystem": "keyring", 00:35:44.731 "config": [] 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "subsystem": "iobuf", 00:35:44.731 "config": [ 00:35:44.731 { 00:35:44.731 "method": "iobuf_set_options", 00:35:44.731 "params": { 00:35:44.731 "large_bufsize": 135168, 00:35:44.731 "large_pool_count": 1024, 00:35:44.731 "small_bufsize": 8192, 00:35:44.731 "small_pool_count": 8192 00:35:44.731 } 00:35:44.731 } 00:35:44.731 ] 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "subsystem": "sock", 00:35:44.731 "config": [ 00:35:44.731 { 00:35:44.731 "method": "sock_impl_set_options", 00:35:44.731 "params": { 00:35:44.731 "enable_ktls": false, 00:35:44.731 "enable_placement_id": 0, 00:35:44.731 "enable_quickack": false, 00:35:44.731 "enable_recv_pipe": true, 00:35:44.731 "enable_zerocopy_send_client": false, 00:35:44.731 "enable_zerocopy_send_server": true, 00:35:44.731 "impl_name": "posix", 00:35:44.731 "recv_buf_size": 2097152, 00:35:44.731 "send_buf_size": 2097152, 00:35:44.731 "tls_version": 0, 00:35:44.731 "zerocopy_threshold": 0 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "sock_impl_set_options", 00:35:44.731 "params": { 00:35:44.731 "enable_ktls": false, 00:35:44.731 "enable_placement_id": 0, 00:35:44.731 "enable_quickack": false, 00:35:44.731 "enable_recv_pipe": true, 00:35:44.731 "enable_zerocopy_send_client": false, 00:35:44.731 "enable_zerocopy_send_server": true, 00:35:44.731 "impl_name": "ssl", 00:35:44.731 "recv_buf_size": 4096, 00:35:44.731 "send_buf_size": 4096, 00:35:44.731 
"tls_version": 0, 00:35:44.731 "zerocopy_threshold": 0 00:35:44.731 } 00:35:44.731 } 00:35:44.731 ] 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "subsystem": "vmd", 00:35:44.731 "config": [] 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "subsystem": "accel", 00:35:44.731 "config": [ 00:35:44.731 { 00:35:44.731 "method": "accel_set_options", 00:35:44.731 "params": { 00:35:44.731 "buf_count": 2048, 00:35:44.731 "large_cache_size": 16, 00:35:44.731 "sequence_count": 2048, 00:35:44.731 "small_cache_size": 128, 00:35:44.731 "task_count": 2048 00:35:44.731 } 00:35:44.731 } 00:35:44.731 ] 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "subsystem": "bdev", 00:35:44.731 "config": [ 00:35:44.731 { 00:35:44.731 "method": "bdev_set_options", 00:35:44.731 "params": { 00:35:44.731 "bdev_auto_examine": true, 00:35:44.731 "bdev_io_cache_size": 256, 00:35:44.731 "bdev_io_pool_size": 65535, 00:35:44.731 "iobuf_large_cache_size": 16, 00:35:44.731 "iobuf_small_cache_size": 128 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "bdev_raid_set_options", 00:35:44.731 "params": { 00:35:44.731 "process_window_size_kb": 1024 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "bdev_iscsi_set_options", 00:35:44.731 "params": { 00:35:44.731 "timeout_sec": 30 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "bdev_nvme_set_options", 00:35:44.731 "params": { 00:35:44.731 "action_on_timeout": "none", 00:35:44.731 "allow_accel_sequence": false, 00:35:44.731 "arbitration_burst": 0, 00:35:44.731 "bdev_retry_count": 3, 00:35:44.731 "ctrlr_loss_timeout_sec": 0, 00:35:44.731 "delay_cmd_submit": true, 00:35:44.731 "dhchap_dhgroups": [ 00:35:44.731 "null", 00:35:44.731 "ffdhe2048", 00:35:44.731 "ffdhe3072", 00:35:44.731 "ffdhe4096", 00:35:44.731 "ffdhe6144", 00:35:44.731 "ffdhe8192" 00:35:44.731 ], 00:35:44.731 "dhchap_digests": [ 00:35:44.731 "sha256", 00:35:44.731 "sha384", 00:35:44.731 "sha512" 00:35:44.731 ], 00:35:44.731 "disable_auto_failback": false, 00:35:44.731 "fast_io_fail_timeout_sec": 0, 00:35:44.731 "generate_uuids": false, 00:35:44.731 "high_priority_weight": 0, 00:35:44.731 "io_path_stat": false, 00:35:44.731 "io_queue_requests": 0, 00:35:44.731 "keep_alive_timeout_ms": 10000, 00:35:44.731 "low_priority_weight": 0, 00:35:44.731 "medium_priority_weight": 0, 00:35:44.731 "nvme_adminq_poll_period_us": 10000, 00:35:44.731 "nvme_error_stat": false, 00:35:44.731 "nvme_ioq_poll_period_us": 0, 00:35:44.731 "rdma_cm_event_timeout_ms": 0, 00:35:44.731 "rdma_max_cq_size": 0, 00:35:44.731 "rdma_srq_size": 0, 00:35:44.731 "reconnect_delay_sec": 0, 00:35:44.731 "timeout_admin_us": 0, 00:35:44.731 "timeout_us": 0, 00:35:44.731 "transport_ack_timeout": 0, 00:35:44.731 "transport_retry_count": 4, 00:35:44.731 "transport_tos": 0 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "bdev_nvme_set_hotplug", 00:35:44.731 "params": { 00:35:44.731 "enable": false, 00:35:44.731 "period_us": 100000 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "bdev_malloc_create", 00:35:44.731 "params": { 00:35:44.731 "block_size": 4096, 00:35:44.731 "name": "malloc0", 00:35:44.731 "num_blocks": 8192, 00:35:44.731 "optimal_io_boundary": 0, 00:35:44.731 "physical_block_size": 4096, 00:35:44.731 "uuid": "a0aad1af-868e-4088-b13f-7cce10ff8952" 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "bdev_wait_for_examine" 00:35:44.731 } 00:35:44.731 ] 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "subsystem": "nbd", 00:35:44.731 "config": [] 00:35:44.731 }, 
00:35:44.731 { 00:35:44.731 "subsystem": "scheduler", 00:35:44.731 "config": [ 00:35:44.731 { 00:35:44.731 "method": "framework_set_scheduler", 00:35:44.731 "params": { 00:35:44.731 "name": "static" 00:35:44.731 } 00:35:44.731 } 00:35:44.731 ] 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "subsystem": "nvmf", 00:35:44.731 "config": [ 00:35:44.731 { 00:35:44.731 "method": "nvmf_set_config", 00:35:44.731 "params": { 00:35:44.731 "admin_cmd_passthru": { 00:35:44.731 "identify_ctrlr": false 00:35:44.731 }, 00:35:44.731 "discovery_filter": "match_any" 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "nvmf_set_max_subsystems", 00:35:44.731 "params": { 00:35:44.731 "max_subsystems": 1024 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "nvmf_set_crdt", 00:35:44.731 "params": { 00:35:44.731 "crdt1": 0, 00:35:44.731 "crdt2": 0, 00:35:44.731 "crdt3": 0 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "nvmf_create_transport", 00:35:44.731 "params": { 00:35:44.731 "abort_timeout_sec": 1, 00:35:44.731 "ack_timeout": 0, 00:35:44.731 "buf_cache_size": 4294967295, 00:35:44.731 "c2h_success": false, 00:35:44.731 "data_wr_pool_size": 0, 00:35:44.731 "dif_insert_or_strip": false, 00:35:44.731 "in_capsule_data_size": 4096, 00:35:44.731 "io_unit_size": 131072, 00:35:44.731 "max_aq_depth": 128, 00:35:44.731 "max_io_qpairs_per_ctrlr": 127, 00:35:44.731 "max_io_size": 131072, 00:35:44.731 "max_queue_depth": 128, 00:35:44.731 "num_shared_buffers": 511, 00:35:44.731 "sock_priority": 0, 00:35:44.731 "trtype": "TCP", 00:35:44.731 "zcopy": false 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "nvmf_create_subsystem", 00:35:44.731 "params": { 00:35:44.731 "allow_any_host": false, 00:35:44.731 "ana_reporting": false, 00:35:44.731 "max_cntlid": 65519, 00:35:44.731 "max_namespaces": 10, 00:35:44.731 "min_cntlid": 1, 00:35:44.731 "model_number": "SPDK bdev Controller", 00:35:44.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:44.731 "serial_number": "SPDK00000000000001" 00:35:44.731 } 00:35:44.731 }, 00:35:44.731 { 00:35:44.731 "method": "nvmf_subsystem_add_host", 00:35:44.731 "params": { 00:35:44.732 "host": "nqn.2016-06.io.spdk:host1", 00:35:44.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:44.732 "psk": "/tmp/tmp.nM8lqbv8dG" 00:35:44.732 } 00:35:44.732 }, 00:35:44.732 { 00:35:44.732 "method": "nvmf_subsystem_add_ns", 00:35:44.732 "params": { 00:35:44.732 "namespace": { 00:35:44.732 "bdev_name": "malloc0", 00:35:44.732 "nguid": "A0AAD1AF868E4088B13F7CCE10FF8952", 00:35:44.732 "no_auto_visible": false, 00:35:44.732 "nsid": 1, 00:35:44.732 "uuid": "a0aad1af-868e-4088-b13f-7cce10ff8952" 00:35:44.732 }, 00:35:44.732 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:35:44.732 } 00:35:44.732 }, 00:35:44.732 { 00:35:44.732 "method": "nvmf_subsystem_add_listener", 00:35:44.732 "params": { 00:35:44.732 "listen_address": { 00:35:44.732 "adrfam": "IPv4", 00:35:44.732 "traddr": "10.0.0.2", 00:35:44.732 "trsvcid": "4420", 00:35:44.732 "trtype": "TCP" 00:35:44.732 }, 00:35:44.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:44.732 "secure_channel": true 00:35:44.732 } 00:35:44.732 } 00:35:44.732 ] 00:35:44.732 } 00:35:44.732 ] 00:35:44.732 }' 00:35:44.732 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:44.732 01:01:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100193 00:35:44.732 01:01:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
-c /dev/fd/62 00:35:44.732 01:01:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100193 00:35:44.732 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 100193 ']' 00:35:44.732 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.732 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:44.732 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.732 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:44.732 01:01:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:44.732 [2024-05-15 01:01:47.779273] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:44.732 [2024-05-15 01:01:47.779373] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.732 [2024-05-15 01:01:47.920959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.732 [2024-05-15 01:01:48.000557] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:44.732 [2024-05-15 01:01:48.000646] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:44.732 [2024-05-15 01:01:48.000659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.732 [2024-05-15 01:01:48.000680] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.732 [2024-05-15 01:01:48.000688] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:44.732 [2024-05-15 01:01:48.000769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.991 [2024-05-15 01:01:48.228252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:44.991 [2024-05-15 01:01:48.244202] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:44.991 [2024-05-15 01:01:48.260129] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:44.991 [2024-05-15 01:01:48.260216] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:44.991 [2024-05-15 01:01:48.260393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.559 01:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:45.559 01:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:45.559 01:01:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:45.559 01:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:45.559 01:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:45.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
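For readability: the target configuration that pid 100193 is fed via /dev/fd/62 above corresponds to the same TLS setup that target/tls.sh performs elsewhere in this log through live RPC calls. A minimal sketch of that sequence, reusing only the addresses, NQNs, PSK path and flags that appear in this run (paths are relative to the spdk repo, and the default rpc.py socket /var/tmp/spdk.sock is assumed):

  # create the TCP transport and a subsystem backed by a malloc bdev
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as a secure (TLS) channel on 10.0.0.2:4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # allow host1 to connect, authenticated with the PSK file used throughout this test
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nM8lqbv8dG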
00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=100237 00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 100237 /var/tmp/bdevperf.sock 00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 100237 ']' 00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:35:45.818 01:01:48 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:35:45.818 "subsystems": [ 00:35:45.818 { 00:35:45.818 "subsystem": "keyring", 00:35:45.818 "config": [] 00:35:45.818 }, 00:35:45.818 { 00:35:45.818 "subsystem": "iobuf", 00:35:45.818 "config": [ 00:35:45.818 { 00:35:45.818 "method": "iobuf_set_options", 00:35:45.818 "params": { 00:35:45.818 "large_bufsize": 135168, 00:35:45.818 "large_pool_count": 1024, 00:35:45.818 "small_bufsize": 8192, 00:35:45.818 "small_pool_count": 8192 00:35:45.818 } 00:35:45.818 } 00:35:45.818 ] 00:35:45.818 }, 00:35:45.818 { 00:35:45.818 "subsystem": "sock", 00:35:45.818 "config": [ 00:35:45.818 { 00:35:45.818 "method": "sock_impl_set_options", 00:35:45.818 "params": { 00:35:45.818 "enable_ktls": false, 00:35:45.818 "enable_placement_id": 0, 00:35:45.818 "enable_quickack": false, 00:35:45.818 "enable_recv_pipe": true, 00:35:45.818 "enable_zerocopy_send_client": false, 00:35:45.818 "enable_zerocopy_send_server": true, 00:35:45.818 "impl_name": "posix", 00:35:45.818 "recv_buf_size": 2097152, 00:35:45.818 "send_buf_size": 2097152, 00:35:45.818 "tls_version": 0, 00:35:45.818 "zerocopy_threshold": 0 00:35:45.818 } 00:35:45.818 }, 00:35:45.818 { 00:35:45.818 "method": "sock_impl_set_options", 00:35:45.818 "params": { 00:35:45.818 "enable_ktls": false, 00:35:45.818 "enable_placement_id": 0, 00:35:45.818 "enable_quickack": false, 00:35:45.818 "enable_recv_pipe": true, 00:35:45.818 "enable_zerocopy_send_client": false, 00:35:45.818 "enable_zerocopy_send_server": true, 00:35:45.818 "impl_name": "ssl", 00:35:45.818 "recv_buf_size": 4096, 00:35:45.818 "send_buf_size": 4096, 00:35:45.818 "tls_version": 0, 00:35:45.818 "zerocopy_threshold": 0 00:35:45.818 } 00:35:45.818 } 00:35:45.818 ] 00:35:45.818 }, 00:35:45.818 { 00:35:45.818 "subsystem": "vmd", 00:35:45.818 "config": [] 00:35:45.818 }, 00:35:45.818 { 00:35:45.818 "subsystem": "accel", 00:35:45.818 "config": [ 00:35:45.818 { 00:35:45.818 "method": "accel_set_options", 00:35:45.818 "params": { 00:35:45.818 "buf_count": 2048, 00:35:45.818 "large_cache_size": 16, 00:35:45.818 "sequence_count": 2048, 00:35:45.818 "small_cache_size": 128, 00:35:45.818 "task_count": 2048 00:35:45.818 } 00:35:45.818 } 00:35:45.818 ] 00:35:45.818 }, 00:35:45.818 { 00:35:45.818 "subsystem": "bdev", 00:35:45.818 "config": [ 00:35:45.818 { 
00:35:45.818 "method": "bdev_set_options", 00:35:45.818 "params": { 00:35:45.818 "bdev_auto_examine": true, 00:35:45.818 "bdev_io_cache_size": 256, 00:35:45.818 "bdev_io_pool_size": 65535, 00:35:45.818 "iobuf_large_cache_size": 16, 00:35:45.818 "iobuf_small_cache_size": 128 00:35:45.818 } 00:35:45.818 }, 00:35:45.818 { 00:35:45.818 "method": "bdev_raid_set_options", 00:35:45.818 "params": { 00:35:45.818 "process_window_size_kb": 1024 00:35:45.818 } 00:35:45.818 }, 00:35:45.818 { 00:35:45.818 "method": "bdev_iscsi_set_options", 00:35:45.818 "params": { 00:35:45.818 "timeout_sec": 30 00:35:45.818 } 00:35:45.818 }, 00:35:45.818 { 00:35:45.818 "method": "bdev_nvme_set_options", 00:35:45.818 "params": { 00:35:45.818 "action_on_timeout": "none", 00:35:45.818 "allow_accel_sequence": false, 00:35:45.818 "arbitration_burst": 0, 00:35:45.818 "bdev_retry_count": 3, 00:35:45.818 "ctrlr_loss_timeout_sec": 0, 00:35:45.818 "delay_cmd_submit": true, 00:35:45.818 "dhchap_dhgroups": [ 00:35:45.818 "null", 00:35:45.818 "ffdhe2048", 00:35:45.818 "ffdhe3072", 00:35:45.818 "ffdhe4096", 00:35:45.818 "ffdhe6144", 00:35:45.818 "ffdhe8192" 00:35:45.818 ], 00:35:45.818 "dhchap_digests": [ 00:35:45.818 "sha256", 00:35:45.818 "sha384", 00:35:45.818 "sha512" 00:35:45.818 ], 00:35:45.818 "disable_auto_failback": false, 00:35:45.818 "fast_io_fail_timeout_sec": 0, 00:35:45.818 "generate_uuids": false, 00:35:45.818 "high_priority_weight": 0, 00:35:45.818 "io_path_stat": false, 00:35:45.818 "io_queue_requests": 512, 00:35:45.818 "keep_alive_timeout_ms": 10000, 00:35:45.818 "low_priority_weight": 0, 00:35:45.818 "medium_priority_weight": 0, 00:35:45.818 "nvme_adminq_poll_period_us": 10000, 00:35:45.818 "nvme_error_stat": false, 00:35:45.818 "nvme_ioq_poll_period_us": 0, 00:35:45.818 "rdma_cm_event_timeout_ms": 0, 00:35:45.818 "rdma_max_cq_size": 0, 00:35:45.818 "rdma_srq_size": 0, 00:35:45.818 "reconnect_delay_sec": 0, 00:35:45.818 "timeout_admin_us": 0, 00:35:45.818 "timeout_us": 0, 00:35:45.818 "transport_ack_timeout": 0, 00:35:45.818 "transport_retry_count": 4, 00:35:45.818 "transport_tos": 0 00:35:45.818 } 00:35:45.818 }, 00:35:45.818 { 00:35:45.818 "method": "bdev_nvme_attach_controller", 00:35:45.818 "params": { 00:35:45.818 "adrfam": "IPv4", 00:35:45.818 "ctrlr_loss_timeout_sec": 0, 00:35:45.818 "ddgst": false, 00:35:45.818 "fast_io_fail_timeout_sec": 0, 00:35:45.818 "hdgst": false, 00:35:45.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.818 "name": "TLSTEST", 00:35:45.818 "prchk_guard": false, 00:35:45.819 "prchk_reftag": false, 00:35:45.819 "psk": "/tmp/tmp.nM8lqbv8dG", 00:35:45.819 "reconnect_delay_sec": 0, 00:35:45.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.819 "traddr": "10.0.0.2", 00:35:45.819 "trsvcid": "4420", 00:35:45.819 "trtype": "TCP" 00:35:45.819 } 00:35:45.819 }, 00:35:45.819 { 00:35:45.819 "method": "bdev_nvme_set_hotplug", 00:35:45.819 "params": { 00:35:45.819 "enable": false, 00:35:45.819 "period_us": 100000 00:35:45.819 } 00:35:45.819 }, 00:35:45.819 { 00:35:45.819 "method": "bdev_wait_for_examine" 00:35:45.819 } 00:35:45.819 ] 00:35:45.819 }, 00:35:45.819 { 00:35:45.819 "subsystem": "nbd", 00:35:45.819 "config": [] 00:35:45.819 } 00:35:45.819 ] 00:35:45.819 }' 00:35:45.819 [2024-05-15 01:01:48.908951] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:35:45.819 [2024-05-15 01:01:48.909055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100237 ] 00:35:45.819 [2024-05-15 01:01:49.053693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.077 [2024-05-15 01:01:49.153215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:46.077 [2024-05-15 01:01:49.314341] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:46.077 [2024-05-15 01:01:49.314476] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:46.644 01:01:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:46.644 01:01:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:46.644 01:01:49 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:35:46.903 Running I/O for 10 seconds... 00:35:57.001 00:35:57.001 Latency(us) 00:35:57.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.001 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:57.001 Verification LBA range: start 0x0 length 0x2000 00:35:57.001 TLSTESTn1 : 10.03 3906.74 15.26 0.00 0.00 32697.19 8400.52 21209.83 00:35:57.001 =================================================================================================================== 00:35:57.001 Total : 3906.74 15.26 0.00 0.00 32697.19 8400.52 21209.83 00:35:57.001 0 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 100237 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 100237 ']' 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 100237 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100237 00:35:57.001 killing process with pid 100237 00:35:57.001 Received shutdown signal, test time was about 10.000000 seconds 00:35:57.001 00:35:57.001 Latency(us) 00:35:57.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.001 =================================================================================================================== 00:35:57.001 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100237' 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 100237 00:35:57.001 [2024-05-15 01:02:00.090804] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:57.001 01:02:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 100237 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 100193 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 100193 ']' 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 100193 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100193 00:35:57.260 killing process with pid 100193 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100193' 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 100193 00:35:57.260 [2024-05-15 01:02:00.329621] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:57.260 [2024-05-15 01:02:00.329670] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:57.260 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 100193 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100379 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100379 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 100379 ']' 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:57.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:57.520 01:02:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 [2024-05-15 01:02:00.605166] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:35:57.520 [2024-05-15 01:02:00.605282] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:57.520 [2024-05-15 01:02:00.745486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.778 [2024-05-15 01:02:00.847077] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:57.778 [2024-05-15 01:02:00.847152] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:57.778 [2024-05-15 01:02:00.847174] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:57.778 [2024-05-15 01:02:00.847192] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:57.778 [2024-05-15 01:02:00.847208] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:57.779 [2024-05-15 01:02:00.847249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.345 01:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:58.345 01:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:35:58.345 01:02:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:58.345 01:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:58.345 01:02:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:58.345 01:02:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:58.345 01:02:01 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.nM8lqbv8dG 00:35:58.345 01:02:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nM8lqbv8dG 00:35:58.345 01:02:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:58.603 [2024-05-15 01:02:01.861574] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:58.603 01:02:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:58.882 01:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:59.142 [2024-05-15 01:02:02.381684] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:59.142 [2024-05-15 01:02:02.381798] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:59.142 [2024-05-15 01:02:02.381993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.142 01:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:59.402 malloc0 00:35:59.402 01:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:59.662 01:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nM8lqbv8dG 00:35:59.926 [2024-05-15 01:02:03.157571] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:59.926 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=100482 00:35:59.926 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:35:59.926 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:59.926 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 100482 /var/tmp/bdevperf.sock 00:35:59.926 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 100482 ']' 00:35:59.926 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:59.926 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:59.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:59.926 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:59.926 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:59.926 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:00.185 [2024-05-15 01:02:03.233373] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:00.185 [2024-05-15 01:02:03.233490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100482 ] 00:36:00.185 [2024-05-15 01:02:03.373714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.443 [2024-05-15 01:02:03.475255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.010 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:01.010 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:36:01.010 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nM8lqbv8dG 00:36:01.269 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:36:01.527 [2024-05-15 01:02:04.606588] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:01.527 nvme0n1 00:36:01.527 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:01.527 Running I/O for 1 seconds... 
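On the initiator side this phase switches from passing the PSK file directly to bdev_nvme_attach_controller to registering it in the keyring first. In short, the sequence driven against the bdevperf RPC socket above is (socket path, key name and NQNs exactly as used in this run; rpc.py paths relative to the spdk repo):

  # register the PSK file under the name key0 inside the bdevperf application
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nM8lqbv8dG
  # attach to the TLS-enabled listener, referencing the key by name instead of by path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # run the configured verify workload
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests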
00:36:02.905 00:36:02.905 Latency(us) 00:36:02.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:02.905 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:02.905 Verification LBA range: start 0x0 length 0x2000 00:36:02.905 nvme0n1 : 1.02 3913.00 15.29 0.00 0.00 32374.09 7268.54 36223.53 00:36:02.905 =================================================================================================================== 00:36:02.905 Total : 3913.00 15.29 0.00 0.00 32374.09 7268.54 36223.53 00:36:02.905 0 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 100482 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 100482 ']' 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 100482 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100482 00:36:02.905 killing process with pid 100482 00:36:02.905 Received shutdown signal, test time was about 1.000000 seconds 00:36:02.905 00:36:02.905 Latency(us) 00:36:02.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:02.905 =================================================================================================================== 00:36:02.905 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100482' 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 100482 00:36:02.905 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 100482 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 100379 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 100379 ']' 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 100379 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100379 00:36:02.905 killing process with pid 100379 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100379' 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 100379 00:36:02.905 [2024-05-15 01:02:06.089496] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:02.905 [2024-05-15 01:02:06.089538] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:02.905 01:02:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 100379 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100557 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100557 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 100557 ']' 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:03.165 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:03.165 [2024-05-15 01:02:06.368137] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:03.165 [2024-05-15 01:02:06.368431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.423 [2024-05-15 01:02:06.509002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.423 [2024-05-15 01:02:06.596856] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.423 [2024-05-15 01:02:06.596906] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.423 [2024-05-15 01:02:06.596918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.423 [2024-05-15 01:02:06.596927] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.423 [2024-05-15 01:02:06.596934] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
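The tracepoint notices printed at each nvmf_tgt start (group mask 0xFFFF) also describe how a trace could be pulled from these runs if a failure needed debugging. A minimal sketch following those hints only; the spdk_trace binary living under build/bin is an assumption of a default build:

  # snapshot live events from instance 0 of the nvmf target, as the notice suggests
  build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0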
00:36:03.423 [2024-05-15 01:02:06.596963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:04.359 [2024-05-15 01:02:07.376288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.359 malloc0 00:36:04.359 [2024-05-15 01:02:07.407916] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:04.359 [2024-05-15 01:02:07.408044] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:04.359 [2024-05-15 01:02:07.408222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=100607 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 100607 /var/tmp/bdevperf.sock 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 100607 ']' 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:04.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:04.359 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:04.359 [2024-05-15 01:02:07.485113] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:36:04.359 [2024-05-15 01:02:07.485217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100607 ] 00:36:04.359 [2024-05-15 01:02:07.621513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:04.619 [2024-05-15 01:02:07.712966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.555 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:05.555 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:36:05.555 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nM8lqbv8dG 00:36:05.555 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:36:05.813 [2024-05-15 01:02:08.959555] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:05.813 nvme0n1 00:36:05.813 01:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:06.071 Running I/O for 1 seconds... 00:36:07.032 00:36:07.032 Latency(us) 00:36:07.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.032 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:07.032 Verification LBA range: start 0x0 length 0x2000 00:36:07.032 nvme0n1 : 1.03 3963.70 15.48 0.00 0.00 31888.17 8221.79 21209.83 00:36:07.032 =================================================================================================================== 00:36:07.032 Total : 3963.70 15.48 0.00 0.00 31888.17 8221.79 21209.83 00:36:07.032 0 00:36:07.032 01:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:36:07.032 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.032 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:07.306 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.306 01:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:36:07.306 "subsystems": [ 00:36:07.306 { 00:36:07.306 "subsystem": "keyring", 00:36:07.306 "config": [ 00:36:07.306 { 00:36:07.306 "method": "keyring_file_add_key", 00:36:07.306 "params": { 00:36:07.306 "name": "key0", 00:36:07.306 "path": "/tmp/tmp.nM8lqbv8dG" 00:36:07.306 } 00:36:07.306 } 00:36:07.306 ] 00:36:07.306 }, 00:36:07.306 { 00:36:07.306 "subsystem": "iobuf", 00:36:07.306 "config": [ 00:36:07.306 { 00:36:07.306 "method": "iobuf_set_options", 00:36:07.306 "params": { 00:36:07.306 "large_bufsize": 135168, 00:36:07.306 "large_pool_count": 1024, 00:36:07.306 "small_bufsize": 8192, 00:36:07.306 "small_pool_count": 8192 00:36:07.306 } 00:36:07.306 } 00:36:07.306 ] 00:36:07.306 }, 00:36:07.306 { 00:36:07.306 "subsystem": "sock", 00:36:07.306 "config": [ 00:36:07.306 { 00:36:07.306 "method": "sock_impl_set_options", 00:36:07.306 "params": { 00:36:07.306 "enable_ktls": false, 00:36:07.306 "enable_placement_id": 0, 00:36:07.306 "enable_quickack": false, 00:36:07.306 "enable_recv_pipe": true, 00:36:07.306 
"enable_zerocopy_send_client": false, 00:36:07.306 "enable_zerocopy_send_server": true, 00:36:07.306 "impl_name": "posix", 00:36:07.306 "recv_buf_size": 2097152, 00:36:07.306 "send_buf_size": 2097152, 00:36:07.306 "tls_version": 0, 00:36:07.306 "zerocopy_threshold": 0 00:36:07.306 } 00:36:07.306 }, 00:36:07.306 { 00:36:07.306 "method": "sock_impl_set_options", 00:36:07.306 "params": { 00:36:07.306 "enable_ktls": false, 00:36:07.306 "enable_placement_id": 0, 00:36:07.306 "enable_quickack": false, 00:36:07.306 "enable_recv_pipe": true, 00:36:07.306 "enable_zerocopy_send_client": false, 00:36:07.306 "enable_zerocopy_send_server": true, 00:36:07.306 "impl_name": "ssl", 00:36:07.306 "recv_buf_size": 4096, 00:36:07.306 "send_buf_size": 4096, 00:36:07.306 "tls_version": 0, 00:36:07.306 "zerocopy_threshold": 0 00:36:07.306 } 00:36:07.306 } 00:36:07.306 ] 00:36:07.306 }, 00:36:07.306 { 00:36:07.306 "subsystem": "vmd", 00:36:07.306 "config": [] 00:36:07.306 }, 00:36:07.306 { 00:36:07.306 "subsystem": "accel", 00:36:07.306 "config": [ 00:36:07.306 { 00:36:07.306 "method": "accel_set_options", 00:36:07.306 "params": { 00:36:07.306 "buf_count": 2048, 00:36:07.306 "large_cache_size": 16, 00:36:07.306 "sequence_count": 2048, 00:36:07.306 "small_cache_size": 128, 00:36:07.306 "task_count": 2048 00:36:07.306 } 00:36:07.306 } 00:36:07.306 ] 00:36:07.306 }, 00:36:07.306 { 00:36:07.306 "subsystem": "bdev", 00:36:07.306 "config": [ 00:36:07.306 { 00:36:07.306 "method": "bdev_set_options", 00:36:07.306 "params": { 00:36:07.306 "bdev_auto_examine": true, 00:36:07.306 "bdev_io_cache_size": 256, 00:36:07.306 "bdev_io_pool_size": 65535, 00:36:07.306 "iobuf_large_cache_size": 16, 00:36:07.306 "iobuf_small_cache_size": 128 00:36:07.306 } 00:36:07.306 }, 00:36:07.306 { 00:36:07.306 "method": "bdev_raid_set_options", 00:36:07.306 "params": { 00:36:07.306 "process_window_size_kb": 1024 00:36:07.306 } 00:36:07.306 }, 00:36:07.306 { 00:36:07.306 "method": "bdev_iscsi_set_options", 00:36:07.306 "params": { 00:36:07.306 "timeout_sec": 30 00:36:07.306 } 00:36:07.306 }, 00:36:07.306 { 00:36:07.306 "method": "bdev_nvme_set_options", 00:36:07.306 "params": { 00:36:07.306 "action_on_timeout": "none", 00:36:07.306 "allow_accel_sequence": false, 00:36:07.306 "arbitration_burst": 0, 00:36:07.306 "bdev_retry_count": 3, 00:36:07.306 "ctrlr_loss_timeout_sec": 0, 00:36:07.306 "delay_cmd_submit": true, 00:36:07.306 "dhchap_dhgroups": [ 00:36:07.306 "null", 00:36:07.306 "ffdhe2048", 00:36:07.306 "ffdhe3072", 00:36:07.306 "ffdhe4096", 00:36:07.306 "ffdhe6144", 00:36:07.306 "ffdhe8192" 00:36:07.306 ], 00:36:07.306 "dhchap_digests": [ 00:36:07.306 "sha256", 00:36:07.306 "sha384", 00:36:07.306 "sha512" 00:36:07.306 ], 00:36:07.306 "disable_auto_failback": false, 00:36:07.306 "fast_io_fail_timeout_sec": 0, 00:36:07.306 "generate_uuids": false, 00:36:07.306 "high_priority_weight": 0, 00:36:07.306 "io_path_stat": false, 00:36:07.306 "io_queue_requests": 0, 00:36:07.306 "keep_alive_timeout_ms": 10000, 00:36:07.306 "low_priority_weight": 0, 00:36:07.306 "medium_priority_weight": 0, 00:36:07.306 "nvme_adminq_poll_period_us": 10000, 00:36:07.306 "nvme_error_stat": false, 00:36:07.306 "nvme_ioq_poll_period_us": 0, 00:36:07.306 "rdma_cm_event_timeout_ms": 0, 00:36:07.306 "rdma_max_cq_size": 0, 00:36:07.306 "rdma_srq_size": 0, 00:36:07.306 "reconnect_delay_sec": 0, 00:36:07.306 "timeout_admin_us": 0, 00:36:07.306 "timeout_us": 0, 00:36:07.306 "transport_ack_timeout": 0, 00:36:07.306 "transport_retry_count": 4, 00:36:07.306 "transport_tos": 0 
00:36:07.306 } 00:36:07.306 }, 00:36:07.306 { 00:36:07.306 "method": "bdev_nvme_set_hotplug", 00:36:07.306 "params": { 00:36:07.307 "enable": false, 00:36:07.307 "period_us": 100000 00:36:07.307 } 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "method": "bdev_malloc_create", 00:36:07.307 "params": { 00:36:07.307 "block_size": 4096, 00:36:07.307 "name": "malloc0", 00:36:07.307 "num_blocks": 8192, 00:36:07.307 "optimal_io_boundary": 0, 00:36:07.307 "physical_block_size": 4096, 00:36:07.307 "uuid": "1b18103b-50ae-4b21-b955-59bdccff82d6" 00:36:07.307 } 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "method": "bdev_wait_for_examine" 00:36:07.307 } 00:36:07.307 ] 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "subsystem": "nbd", 00:36:07.307 "config": [] 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "subsystem": "scheduler", 00:36:07.307 "config": [ 00:36:07.307 { 00:36:07.307 "method": "framework_set_scheduler", 00:36:07.307 "params": { 00:36:07.307 "name": "static" 00:36:07.307 } 00:36:07.307 } 00:36:07.307 ] 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "subsystem": "nvmf", 00:36:07.307 "config": [ 00:36:07.307 { 00:36:07.307 "method": "nvmf_set_config", 00:36:07.307 "params": { 00:36:07.307 "admin_cmd_passthru": { 00:36:07.307 "identify_ctrlr": false 00:36:07.307 }, 00:36:07.307 "discovery_filter": "match_any" 00:36:07.307 } 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "method": "nvmf_set_max_subsystems", 00:36:07.307 "params": { 00:36:07.307 "max_subsystems": 1024 00:36:07.307 } 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "method": "nvmf_set_crdt", 00:36:07.307 "params": { 00:36:07.307 "crdt1": 0, 00:36:07.307 "crdt2": 0, 00:36:07.307 "crdt3": 0 00:36:07.307 } 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "method": "nvmf_create_transport", 00:36:07.307 "params": { 00:36:07.307 "abort_timeout_sec": 1, 00:36:07.307 "ack_timeout": 0, 00:36:07.307 "buf_cache_size": 4294967295, 00:36:07.307 "c2h_success": false, 00:36:07.307 "data_wr_pool_size": 0, 00:36:07.307 "dif_insert_or_strip": false, 00:36:07.307 "in_capsule_data_size": 4096, 00:36:07.307 "io_unit_size": 131072, 00:36:07.307 "max_aq_depth": 128, 00:36:07.307 "max_io_qpairs_per_ctrlr": 127, 00:36:07.307 "max_io_size": 131072, 00:36:07.307 "max_queue_depth": 128, 00:36:07.307 "num_shared_buffers": 511, 00:36:07.307 "sock_priority": 0, 00:36:07.307 "trtype": "TCP", 00:36:07.307 "zcopy": false 00:36:07.307 } 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "method": "nvmf_create_subsystem", 00:36:07.307 "params": { 00:36:07.307 "allow_any_host": false, 00:36:07.307 "ana_reporting": false, 00:36:07.307 "max_cntlid": 65519, 00:36:07.307 "max_namespaces": 32, 00:36:07.307 "min_cntlid": 1, 00:36:07.307 "model_number": "SPDK bdev Controller", 00:36:07.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:07.307 "serial_number": "00000000000000000000" 00:36:07.307 } 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "method": "nvmf_subsystem_add_host", 00:36:07.307 "params": { 00:36:07.307 "host": "nqn.2016-06.io.spdk:host1", 00:36:07.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:07.307 "psk": "key0" 00:36:07.307 } 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 "method": "nvmf_subsystem_add_ns", 00:36:07.307 "params": { 00:36:07.307 "namespace": { 00:36:07.307 "bdev_name": "malloc0", 00:36:07.307 "nguid": "1B18103B50AE4B21B95559BDCCFF82D6", 00:36:07.307 "no_auto_visible": false, 00:36:07.307 "nsid": 1, 00:36:07.307 "uuid": "1b18103b-50ae-4b21-b955-59bdccff82d6" 00:36:07.307 }, 00:36:07.307 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:36:07.307 } 00:36:07.307 }, 00:36:07.307 { 00:36:07.307 
"method": "nvmf_subsystem_add_listener", 00:36:07.307 "params": { 00:36:07.307 "listen_address": { 00:36:07.307 "adrfam": "IPv4", 00:36:07.307 "traddr": "10.0.0.2", 00:36:07.307 "trsvcid": "4420", 00:36:07.307 "trtype": "TCP" 00:36:07.307 }, 00:36:07.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:07.307 "secure_channel": true 00:36:07.307 } 00:36:07.307 } 00:36:07.307 ] 00:36:07.307 } 00:36:07.307 ] 00:36:07.307 }' 00:36:07.307 01:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:36:07.566 01:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:36:07.566 "subsystems": [ 00:36:07.566 { 00:36:07.566 "subsystem": "keyring", 00:36:07.566 "config": [ 00:36:07.566 { 00:36:07.566 "method": "keyring_file_add_key", 00:36:07.566 "params": { 00:36:07.566 "name": "key0", 00:36:07.566 "path": "/tmp/tmp.nM8lqbv8dG" 00:36:07.566 } 00:36:07.566 } 00:36:07.566 ] 00:36:07.566 }, 00:36:07.566 { 00:36:07.566 "subsystem": "iobuf", 00:36:07.566 "config": [ 00:36:07.566 { 00:36:07.566 "method": "iobuf_set_options", 00:36:07.566 "params": { 00:36:07.566 "large_bufsize": 135168, 00:36:07.566 "large_pool_count": 1024, 00:36:07.566 "small_bufsize": 8192, 00:36:07.566 "small_pool_count": 8192 00:36:07.566 } 00:36:07.566 } 00:36:07.566 ] 00:36:07.566 }, 00:36:07.566 { 00:36:07.566 "subsystem": "sock", 00:36:07.566 "config": [ 00:36:07.566 { 00:36:07.566 "method": "sock_impl_set_options", 00:36:07.566 "params": { 00:36:07.566 "enable_ktls": false, 00:36:07.566 "enable_placement_id": 0, 00:36:07.566 "enable_quickack": false, 00:36:07.566 "enable_recv_pipe": true, 00:36:07.566 "enable_zerocopy_send_client": false, 00:36:07.566 "enable_zerocopy_send_server": true, 00:36:07.566 "impl_name": "posix", 00:36:07.566 "recv_buf_size": 2097152, 00:36:07.566 "send_buf_size": 2097152, 00:36:07.566 "tls_version": 0, 00:36:07.567 "zerocopy_threshold": 0 00:36:07.567 } 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "method": "sock_impl_set_options", 00:36:07.567 "params": { 00:36:07.567 "enable_ktls": false, 00:36:07.567 "enable_placement_id": 0, 00:36:07.567 "enable_quickack": false, 00:36:07.567 "enable_recv_pipe": true, 00:36:07.567 "enable_zerocopy_send_client": false, 00:36:07.567 "enable_zerocopy_send_server": true, 00:36:07.567 "impl_name": "ssl", 00:36:07.567 "recv_buf_size": 4096, 00:36:07.567 "send_buf_size": 4096, 00:36:07.567 "tls_version": 0, 00:36:07.567 "zerocopy_threshold": 0 00:36:07.567 } 00:36:07.567 } 00:36:07.567 ] 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "subsystem": "vmd", 00:36:07.567 "config": [] 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "subsystem": "accel", 00:36:07.567 "config": [ 00:36:07.567 { 00:36:07.567 "method": "accel_set_options", 00:36:07.567 "params": { 00:36:07.567 "buf_count": 2048, 00:36:07.567 "large_cache_size": 16, 00:36:07.567 "sequence_count": 2048, 00:36:07.567 "small_cache_size": 128, 00:36:07.567 "task_count": 2048 00:36:07.567 } 00:36:07.567 } 00:36:07.567 ] 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "subsystem": "bdev", 00:36:07.567 "config": [ 00:36:07.567 { 00:36:07.567 "method": "bdev_set_options", 00:36:07.567 "params": { 00:36:07.567 "bdev_auto_examine": true, 00:36:07.567 "bdev_io_cache_size": 256, 00:36:07.567 "bdev_io_pool_size": 65535, 00:36:07.567 "iobuf_large_cache_size": 16, 00:36:07.567 "iobuf_small_cache_size": 128 00:36:07.567 } 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "method": "bdev_raid_set_options", 00:36:07.567 "params": { 00:36:07.567 "process_window_size_kb": 
1024 00:36:07.567 } 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "method": "bdev_iscsi_set_options", 00:36:07.567 "params": { 00:36:07.567 "timeout_sec": 30 00:36:07.567 } 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "method": "bdev_nvme_set_options", 00:36:07.567 "params": { 00:36:07.567 "action_on_timeout": "none", 00:36:07.567 "allow_accel_sequence": false, 00:36:07.567 "arbitration_burst": 0, 00:36:07.567 "bdev_retry_count": 3, 00:36:07.567 "ctrlr_loss_timeout_sec": 0, 00:36:07.567 "delay_cmd_submit": true, 00:36:07.567 "dhchap_dhgroups": [ 00:36:07.567 "null", 00:36:07.567 "ffdhe2048", 00:36:07.567 "ffdhe3072", 00:36:07.567 "ffdhe4096", 00:36:07.567 "ffdhe6144", 00:36:07.567 "ffdhe8192" 00:36:07.567 ], 00:36:07.567 "dhchap_digests": [ 00:36:07.567 "sha256", 00:36:07.567 "sha384", 00:36:07.567 "sha512" 00:36:07.567 ], 00:36:07.567 "disable_auto_failback": false, 00:36:07.567 "fast_io_fail_timeout_sec": 0, 00:36:07.567 "generate_uuids": false, 00:36:07.567 "high_priority_weight": 0, 00:36:07.567 "io_path_stat": false, 00:36:07.567 "io_queue_requests": 512, 00:36:07.567 "keep_alive_timeout_ms": 10000, 00:36:07.567 "low_priority_weight": 0, 00:36:07.567 "medium_priority_weight": 0, 00:36:07.567 "nvme_adminq_poll_period_us": 10000, 00:36:07.567 "nvme_error_stat": false, 00:36:07.567 "nvme_ioq_poll_period_us": 0, 00:36:07.567 "rdma_cm_event_timeout_ms": 0, 00:36:07.567 "rdma_max_cq_size": 0, 00:36:07.567 "rdma_srq_size": 0, 00:36:07.567 "reconnect_delay_sec": 0, 00:36:07.567 "timeout_admin_us": 0, 00:36:07.567 "timeout_us": 0, 00:36:07.567 "transport_ack_timeout": 0, 00:36:07.567 "transport_retry_count": 4, 00:36:07.567 "transport_tos": 0 00:36:07.567 } 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "method": "bdev_nvme_attach_controller", 00:36:07.567 "params": { 00:36:07.567 "adrfam": "IPv4", 00:36:07.567 "ctrlr_loss_timeout_sec": 0, 00:36:07.567 "ddgst": false, 00:36:07.567 "fast_io_fail_timeout_sec": 0, 00:36:07.567 "hdgst": false, 00:36:07.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:07.567 "name": "nvme0", 00:36:07.567 "prchk_guard": false, 00:36:07.567 "prchk_reftag": false, 00:36:07.567 "psk": "key0", 00:36:07.567 "reconnect_delay_sec": 0, 00:36:07.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:07.567 "traddr": "10.0.0.2", 00:36:07.567 "trsvcid": "4420", 00:36:07.567 "trtype": "TCP" 00:36:07.567 } 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "method": "bdev_nvme_set_hotplug", 00:36:07.567 "params": { 00:36:07.567 "enable": false, 00:36:07.567 "period_us": 100000 00:36:07.567 } 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "method": "bdev_enable_histogram", 00:36:07.567 "params": { 00:36:07.567 "enable": true, 00:36:07.567 "name": "nvme0n1" 00:36:07.567 } 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "method": "bdev_wait_for_examine" 00:36:07.567 } 00:36:07.567 ] 00:36:07.567 }, 00:36:07.567 { 00:36:07.567 "subsystem": "nbd", 00:36:07.567 "config": [] 00:36:07.567 } 00:36:07.567 ] 00:36:07.567 }' 00:36:07.567 01:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 100607 00:36:07.567 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 100607 ']' 00:36:07.567 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 100607 00:36:07.567 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:36:07.567 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:07.567 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100607 00:36:07.567 01:02:10 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:07.567 killing process with pid 100607 00:36:07.567 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:07.567 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100607' 00:36:07.567 Received shutdown signal, test time was about 1.000000 seconds 00:36:07.567 00:36:07.567 Latency(us) 00:36:07.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.567 =================================================================================================================== 00:36:07.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:07.567 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 100607 00:36:07.567 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 100607 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 100557 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 100557 ']' 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 100557 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100557 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:07.825 killing process with pid 100557 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100557' 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 100557 00:36:07.825 [2024-05-15 01:02:10.894673] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:07.825 01:02:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 100557 00:36:07.825 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:36:07.825 01:02:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:07.825 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:07.825 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:08.082 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:36:08.083 "subsystems": [ 00:36:08.083 { 00:36:08.083 "subsystem": "keyring", 00:36:08.083 "config": [ 00:36:08.083 { 00:36:08.083 "method": "keyring_file_add_key", 00:36:08.083 "params": { 00:36:08.083 "name": "key0", 00:36:08.083 "path": "/tmp/tmp.nM8lqbv8dG" 00:36:08.083 } 00:36:08.083 } 00:36:08.083 ] 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "subsystem": "iobuf", 00:36:08.083 "config": [ 00:36:08.083 { 00:36:08.083 "method": "iobuf_set_options", 00:36:08.083 "params": { 00:36:08.083 "large_bufsize": 135168, 00:36:08.083 "large_pool_count": 1024, 00:36:08.083 "small_bufsize": 8192, 00:36:08.083 "small_pool_count": 8192 00:36:08.083 } 00:36:08.083 } 00:36:08.083 ] 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "subsystem": "sock", 00:36:08.083 "config": [ 00:36:08.083 { 00:36:08.083 "method": "sock_impl_set_options", 00:36:08.083 "params": { 
00:36:08.083 "enable_ktls": false, 00:36:08.083 "enable_placement_id": 0, 00:36:08.083 "enable_quickack": false, 00:36:08.083 "enable_recv_pipe": true, 00:36:08.083 "enable_zerocopy_send_client": false, 00:36:08.083 "enable_zerocopy_send_server": true, 00:36:08.083 "impl_name": "posix", 00:36:08.083 "recv_buf_size": 2097152, 00:36:08.083 "send_buf_size": 2097152, 00:36:08.083 "tls_version": 0, 00:36:08.083 "zerocopy_threshold": 0 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "sock_impl_set_options", 00:36:08.083 "params": { 00:36:08.083 "enable_ktls": false, 00:36:08.083 "enable_placement_id": 0, 00:36:08.083 "enable_quickack": false, 00:36:08.083 "enable_recv_pipe": true, 00:36:08.083 "enable_zerocopy_send_client": false, 00:36:08.083 "enable_zerocopy_send_server": true, 00:36:08.083 "impl_name": "ssl", 00:36:08.083 "recv_buf_size": 4096, 00:36:08.083 "send_buf_size": 4096, 00:36:08.083 "tls_version": 0, 00:36:08.083 "zerocopy_threshold": 0 00:36:08.083 } 00:36:08.083 } 00:36:08.083 ] 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "subsystem": "vmd", 00:36:08.083 "config": [] 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "subsystem": "accel", 00:36:08.083 "config": [ 00:36:08.083 { 00:36:08.083 "method": "accel_set_options", 00:36:08.083 "params": { 00:36:08.083 "buf_count": 2048, 00:36:08.083 "large_cache_size": 16, 00:36:08.083 "sequence_count": 2048, 00:36:08.083 "small_cache_size": 128, 00:36:08.083 "task_count": 2048 00:36:08.083 } 00:36:08.083 } 00:36:08.083 ] 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "subsystem": "bdev", 00:36:08.083 "config": [ 00:36:08.083 { 00:36:08.083 "method": "bdev_set_options", 00:36:08.083 "params": { 00:36:08.083 "bdev_auto_examine": true, 00:36:08.083 "bdev_io_cache_size": 256, 00:36:08.083 "bdev_io_pool_size": 65535, 00:36:08.083 "iobuf_large_cache_size": 16, 00:36:08.083 "iobuf_small_cache_size": 128 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "bdev_raid_set_options", 00:36:08.083 "params": { 00:36:08.083 "process_window_size_kb": 1024 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "bdev_iscsi_set_options", 00:36:08.083 "params": { 00:36:08.083 "timeout_sec": 30 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "bdev_nvme_set_options", 00:36:08.083 "params": { 00:36:08.083 "action_on_timeout": "none", 00:36:08.083 "allow_accel_sequence": false, 00:36:08.083 "arbitration_burst": 0, 00:36:08.083 "bdev_retry_count": 3, 00:36:08.083 "ctrlr_loss_timeout_sec": 0, 00:36:08.083 "delay_cmd_submit": true, 00:36:08.083 "dhchap_dhgroups": [ 00:36:08.083 "null", 00:36:08.083 "ffdhe2048", 00:36:08.083 "ffdhe3072", 00:36:08.083 "ffdhe4096", 00:36:08.083 "ffdhe6144", 00:36:08.083 "ffdhe8192" 00:36:08.083 ], 00:36:08.083 "dhchap_digests": [ 00:36:08.083 "sha256", 00:36:08.083 "sha384", 00:36:08.083 "sha512" 00:36:08.083 ], 00:36:08.083 "disable_auto_failback": false, 00:36:08.083 "fast_io_fail_timeout_sec": 0, 00:36:08.083 "generate_uuids": false, 00:36:08.083 "high_priority_weight": 0, 00:36:08.083 "io_path_stat": false, 00:36:08.083 "io_queue_requests": 0, 00:36:08.083 "keep_alive_timeout_ms": 10000, 00:36:08.083 "low_priority_weight": 0, 00:36:08.083 "medium_priority_weight": 0, 00:36:08.083 "nvme_adminq_poll_period_us": 10000, 00:36:08.083 "nvme_error_stat": false, 00:36:08.083 "nvme_ioq_poll_period_us": 0, 00:36:08.083 "rdma_cm_event_timeout_ms": 0, 00:36:08.083 "rdma_max_cq_size": 0, 00:36:08.083 "rdma_srq_size": 0, 00:36:08.083 "reconnect_delay_sec": 0, 00:36:08.083 
"timeout_admin_us": 0, 00:36:08.083 "timeout_us": 0, 00:36:08.083 "transport_ack_timeout": 0, 00:36:08.083 "transport_retry_count": 4, 00:36:08.083 "transport_tos": 0 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "bdev_nvme_set_hotplug", 00:36:08.083 "params": { 00:36:08.083 "enable": false, 00:36:08.083 "period_us": 100000 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "bdev_malloc_create", 00:36:08.083 "params": { 00:36:08.083 "block_size": 4096, 00:36:08.083 "name": "malloc0", 00:36:08.083 "num_blocks": 8192, 00:36:08.083 "optimal_io_boundary": 0, 00:36:08.083 "physical_block_size": 4096, 00:36:08.083 "uuid": "1b18103b-50ae-4b21-b955-59bdccff82d6" 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "bdev_wait_for_examine" 00:36:08.083 } 00:36:08.083 ] 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "subsystem": "nbd", 00:36:08.083 "config": [] 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "subsystem": "scheduler", 00:36:08.083 "config": [ 00:36:08.083 { 00:36:08.083 "method": "framework_set_scheduler", 00:36:08.083 "params": { 00:36:08.083 "name": "static" 00:36:08.083 } 00:36:08.083 } 00:36:08.083 ] 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "subsystem": "nvmf", 00:36:08.083 "config": [ 00:36:08.083 { 00:36:08.083 "method": "nvmf_set_config", 00:36:08.083 "params": { 00:36:08.083 "admin_cmd_passthru": { 00:36:08.083 "identify_ctrlr": false 00:36:08.083 }, 00:36:08.083 "discovery_filter": "match_any" 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "nvmf_set_max_subsystems", 00:36:08.083 "params": { 00:36:08.083 "max_subsystems": 1024 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "nvmf_set_crdt", 00:36:08.083 "params": { 00:36:08.083 "crdt1": 0, 00:36:08.083 "crdt2": 0, 00:36:08.083 "crdt3": 0 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "nvmf_create_transport", 00:36:08.083 "params": { 00:36:08.083 "abort_timeout_sec": 1, 00:36:08.083 "ack_timeout": 0, 00:36:08.083 "buf_cache_size": 4294967295, 00:36:08.083 "c2h_success": false, 00:36:08.083 "data_wr_pool_size": 0, 00:36:08.083 "dif_insert_or_strip": false, 00:36:08.083 "in_capsule_data_size": 4096, 00:36:08.083 "io_unit_size": 131072, 00:36:08.083 "max_aq_depth": 128, 00:36:08.083 "max_io_qpairs_per_ctrlr": 127, 00:36:08.083 "max_io_size": 131072, 00:36:08.083 "max_queue_depth": 128, 00:36:08.083 "num_shared_buffers": 511, 00:36:08.083 "sock_priority": 0, 00:36:08.083 "trtype": "TCP", 00:36:08.083 "zcopy": false 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "nvmf_create_subsystem", 00:36:08.083 "params": { 00:36:08.083 "allow_any_host": false, 00:36:08.083 "ana_reporting": false, 00:36:08.083 "max_cntlid": 65519, 00:36:08.083 "max_namespaces": 32, 00:36:08.083 "min_cntlid": 1, 00:36:08.083 "model_number": "SPDK bdev Controller", 00:36:08.083 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:08.083 "serial_number": "00000000000000000000" 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "nvmf_subsystem_add_host", 00:36:08.083 "params": { 00:36:08.083 "host": "nqn.2016-06.io.spdk:host1", 00:36:08.083 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:08.083 "psk": "key0" 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "nvmf_subsystem_add_ns", 00:36:08.083 "params": { 00:36:08.083 "namespace": { 00:36:08.083 "bdev_name": "malloc0", 00:36:08.083 "nguid": "1B18103B50AE4B21B95559BDCCFF82D6", 00:36:08.083 "no_auto_visible": false, 00:36:08.083 "nsid": 1, 00:36:08.083 
"uuid": "1b18103b-50ae-4b21-b955-59bdccff82d6" 00:36:08.083 }, 00:36:08.083 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:36:08.083 } 00:36:08.083 }, 00:36:08.083 { 00:36:08.083 "method": "nvmf_subsystem_add_listener", 00:36:08.083 "params": { 00:36:08.083 "listen_address": { 00:36:08.083 "adrfam": "IPv4", 00:36:08.083 "traddr": "10.0.0.2", 00:36:08.083 "trsvcid": "4420", 00:36:08.083 "trtype": "TCP" 00:36:08.083 }, 00:36:08.083 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:08.083 "secure_channel": true 00:36:08.083 } 00:36:08.083 } 00:36:08.083 ] 00:36:08.083 } 00:36:08.083 ] 00:36:08.083 }' 00:36:08.083 01:02:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100699 00:36:08.083 01:02:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:36:08.083 01:02:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100699 00:36:08.083 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 100699 ']' 00:36:08.083 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.083 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:08.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.083 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.083 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:08.083 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:08.083 [2024-05-15 01:02:11.167245] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:08.083 [2024-05-15 01:02:11.167351] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.083 [2024-05-15 01:02:11.298849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.342 [2024-05-15 01:02:11.387207] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:08.342 [2024-05-15 01:02:11.387267] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:08.342 [2024-05-15 01:02:11.387279] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.342 [2024-05-15 01:02:11.387287] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.342 [2024-05-15 01:02:11.387295] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:08.342 [2024-05-15 01:02:11.387377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.342 [2024-05-15 01:02:11.617216] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:08.601 [2024-05-15 01:02:11.649075] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:08.601 [2024-05-15 01:02:11.649203] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:08.601 [2024-05-15 01:02:11.649368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:08.859 01:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:08.859 01:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:36:08.859 01:02:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:08.859 01:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:08.859 01:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:09.118 01:02:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:09.118 01:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=100743 00:36:09.118 01:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 100743 /var/tmp/bdevperf.sock 00:36:09.118 01:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 100743 ']' 00:36:09.118 01:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:09.118 01:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:36:09.118 "subsystems": [ 00:36:09.118 { 00:36:09.118 "subsystem": "keyring", 00:36:09.118 "config": [ 00:36:09.118 { 00:36:09.118 "method": "keyring_file_add_key", 00:36:09.118 "params": { 00:36:09.118 "name": "key0", 00:36:09.118 "path": "/tmp/tmp.nM8lqbv8dG" 00:36:09.118 } 00:36:09.118 } 00:36:09.118 ] 00:36:09.118 }, 00:36:09.118 { 00:36:09.118 "subsystem": "iobuf", 00:36:09.118 "config": [ 00:36:09.118 { 00:36:09.118 "method": "iobuf_set_options", 00:36:09.118 "params": { 00:36:09.118 "large_bufsize": 135168, 00:36:09.118 "large_pool_count": 1024, 00:36:09.118 "small_bufsize": 8192, 00:36:09.118 "small_pool_count": 8192 00:36:09.118 } 00:36:09.118 } 00:36:09.118 ] 00:36:09.118 }, 00:36:09.118 { 00:36:09.118 "subsystem": "sock", 00:36:09.118 "config": [ 00:36:09.118 { 00:36:09.118 "method": "sock_impl_set_options", 00:36:09.118 "params": { 00:36:09.118 "enable_ktls": false, 00:36:09.118 "enable_placement_id": 0, 00:36:09.118 "enable_quickack": false, 00:36:09.118 "enable_recv_pipe": true, 00:36:09.118 "enable_zerocopy_send_client": false, 00:36:09.118 "enable_zerocopy_send_server": true, 00:36:09.118 "impl_name": "posix", 00:36:09.118 "recv_buf_size": 2097152, 00:36:09.118 "send_buf_size": 2097152, 00:36:09.118 "tls_version": 0, 00:36:09.118 "zerocopy_threshold": 0 00:36:09.118 } 00:36:09.118 }, 00:36:09.118 { 00:36:09.118 "method": "sock_impl_set_options", 00:36:09.118 "params": { 00:36:09.118 "enable_ktls": false, 00:36:09.118 "enable_placement_id": 0, 00:36:09.118 "enable_quickack": false, 00:36:09.118 "enable_recv_pipe": true, 00:36:09.118 "enable_zerocopy_send_client": false, 00:36:09.118 "enable_zerocopy_send_server": true, 00:36:09.118 "impl_name": "ssl", 00:36:09.118 "recv_buf_size": 4096, 00:36:09.118 "send_buf_size": 4096, 00:36:09.118 
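With the target listening, the client side needs the same PSK before the TLS handshake can complete. The first bdevperf instance earlier in this log (pid 100607) did that over rpc.py; the instance started next (pid 100743) does the same thing through the JSON config it receives on /dev/fd/63. Collected in one place, and using only commands already traced above, the RPC variant is:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# Register the PSK file under the same key name the target side uses.
"$RPC" -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.nM8lqbv8dG

# Attach the controller over TCP; --psk key0 is what turns this into a TLS
# connection to the secure_channel listener on 10.0.0.2:4420.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1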
"tls_version": 0, 00:36:09.118 "zerocopy_threshold": 0 00:36:09.118 } 00:36:09.118 } 00:36:09.118 ] 00:36:09.118 }, 00:36:09.118 { 00:36:09.118 "subsystem": "vmd", 00:36:09.118 "config": [] 00:36:09.118 }, 00:36:09.118 { 00:36:09.118 "subsystem": "accel", 00:36:09.118 "config": [ 00:36:09.118 { 00:36:09.118 "method": "accel_set_options", 00:36:09.118 "params": { 00:36:09.118 "buf_count": 2048, 00:36:09.118 "large_cache_size": 16, 00:36:09.118 "sequence_count": 2048, 00:36:09.118 "small_cache_size": 128, 00:36:09.118 "task_count": 2048 00:36:09.118 } 00:36:09.118 } 00:36:09.118 ] 00:36:09.118 }, 00:36:09.118 { 00:36:09.118 "subsystem": "bdev", 00:36:09.118 "config": [ 00:36:09.118 { 00:36:09.118 "method": "bdev_set_options", 00:36:09.118 "params": { 00:36:09.118 "bdev_auto_examine": true, 00:36:09.118 "bdev_io_cache_size": 256, 00:36:09.118 "bdev_io_pool_size": 65535, 00:36:09.118 "iobuf_large_cache_size": 16, 00:36:09.118 "iobuf_small_cache_size": 128 00:36:09.118 } 00:36:09.118 }, 00:36:09.118 { 00:36:09.118 "method": "bdev_raid_set_options", 00:36:09.118 "params": { 00:36:09.118 "process_window_size_kb": 1024 00:36:09.118 } 00:36:09.118 }, 00:36:09.118 { 00:36:09.118 "method": "bdev_iscsi_set_options", 00:36:09.118 "params": { 00:36:09.118 "timeout_sec": 30 00:36:09.118 } 00:36:09.118 }, 00:36:09.118 { 00:36:09.118 "method": "bdev_nvme_set_options", 00:36:09.118 "params": { 00:36:09.118 "action_on_timeout": "none", 00:36:09.118 "allow_accel_sequence": false, 00:36:09.118 "arbitration_burst": 0, 00:36:09.118 "bdev_retry_count": 3, 00:36:09.118 "ctrlr_loss_timeout_sec": 0, 00:36:09.118 "delay_cmd_submit": true, 00:36:09.118 "dhchap_dhgroups": [ 00:36:09.118 "null", 00:36:09.118 "ffdhe2048", 00:36:09.118 "ffdhe3072", 00:36:09.118 "ffdhe4096", 00:36:09.118 "ffdhe6144", 00:36:09.118 "ffdhe8192" 00:36:09.119 ], 00:36:09.119 "dhchap_digests": [ 00:36:09.119 "sha256", 00:36:09.119 "sha384", 00:36:09.119 "sha512" 00:36:09.119 ], 00:36:09.119 "disable_auto_failback": false, 00:36:09.119 "fast_io_fail_timeout_sec": 0, 00:36:09.119 "generate_uuids": false, 00:36:09.119 "high_priority_weight": 0, 00:36:09.119 "io_path_stat": false, 00:36:09.119 "io_queue_requests": 512, 00:36:09.119 "keep_alive_timeout_ms": 10000, 00:36:09.119 "low_priority_weight": 0, 00:36:09.119 "medium_priority_weight": 0, 00:36:09.119 "nvme_adminq_poll_period_us": 10000, 00:36:09.119 "nvme_error_stat": false, 00:36:09.119 "nvme_ioq_poll_period_us": 0, 00:36:09.119 "rdma_cm_event_timeout_ms": 0, 00:36:09.119 "rdma_max_cq_size": 0, 00:36:09.119 "rdma_srq_size": 0, 00:36:09.119 "reconnect_delay_sec": 0, 00:36:09.119 "timeout_admin_us": 0, 00:36:09.119 "timeout_us": 0, 00:36:09.119 "transport_ack_timeout": 0, 00:36:09.119 "transport_retry_count": 4, 00:36:09.119 "transport_tos": 0 00:36:09.119 } 00:36:09.119 }, 00:36:09.119 { 00:36:09.119 "method": "bdev_nvme_attach_controller", 00:36:09.119 "params": { 00:36:09.119 "adrfam": "IPv4", 00:36:09.119 "ctrlr_loss_timeout_sec": 0, 00:36:09.119 "ddgst": false, 00:36:09.119 "fast_io_fail_timeout_sec": 0, 00:36:09.119 "hdgst": false, 00:36:09.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:09.119 "name": "nvme0", 00:36:09.119 "prchk_guard": false, 00:36:09.119 "prchk_reftag": false, 00:36:09.119 "psk": "key0", 00:36:09.119 "reconnect_delay_sec": 0, 00:36:09.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:09.119 "traddr": "10.0.0.2", 00:36:09.119 "trsvcid": "4420", 00:36:09.119 "trtype": "TCP" 00:36:09.119 } 00:36:09.119 }, 00:36:09.119 { 00:36:09.119 "method": "bdev_nvme_set_hotplug", 
00:36:09.119 "params": { 00:36:09.119 "enable": false, 00:36:09.119 "period_us": 100000 00:36:09.119 } 00:36:09.119 }, 00:36:09.119 { 00:36:09.119 "method": "bdev_enable_histogram", 00:36:09.119 "params": { 00:36:09.119 "enable": true, 00:36:09.119 "name": "nvme0n1" 00:36:09.119 } 00:36:09.119 }, 00:36:09.119 { 00:36:09.119 "method": "bdev_wait_for_examine" 00:36:09.119 } 00:36:09.119 ] 00:36:09.119 }, 00:36:09.119 { 00:36:09.119 "subsystem": "nbd", 00:36:09.119 "config": [] 00:36:09.119 } 00:36:09.119 ] 00:36:09.119 }' 00:36:09.119 01:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:36:09.119 01:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:09.119 01:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:09.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:09.119 01:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:09.119 01:02:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:09.119 [2024-05-15 01:02:12.205854] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:09.119 [2024-05-15 01:02:12.205945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100743 ] 00:36:09.119 [2024-05-15 01:02:12.344676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.377 [2024-05-15 01:02:12.427012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.377 [2024-05-15 01:02:12.593875] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:09.944 01:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:09.944 01:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:36:09.944 01:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:09.944 01:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:36:10.510 01:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.510 01:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:10.510 Running I/O for 1 seconds... 
00:36:11.449 00:36:11.449 Latency(us) 00:36:11.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.449 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:11.449 Verification LBA range: start 0x0 length 0x2000 00:36:11.449 nvme0n1 : 1.03 3810.74 14.89 0.00 0.00 33083.10 8281.37 21567.30 00:36:11.449 =================================================================================================================== 00:36:11.449 Total : 3810.74 14.89 0.00 0.00 33083.10 8281.37 21567.30 00:36:11.449 0 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:36:11.449 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:11.449 nvmf_trace.0 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 100743 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 100743 ']' 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 100743 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100743 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:11.707 killing process with pid 100743 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100743' 00:36:11.707 Received shutdown signal, test time was about 1.000000 seconds 00:36:11.707 00:36:11.707 Latency(us) 00:36:11.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.707 =================================================================================================================== 00:36:11.707 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 100743 00:36:11.707 01:02:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 100743 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:36:11.968 01:02:15 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:11.968 rmmod nvme_tcp 00:36:11.968 rmmod nvme_fabrics 00:36:11.968 rmmod nvme_keyring 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 100699 ']' 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 100699 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 100699 ']' 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 100699 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100699 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:11.968 killing process with pid 100699 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100699' 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 100699 00:36:11.968 [2024-05-15 01:02:15.167329] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:11.968 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 100699 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.eTIIZlZ31S /tmp/tmp.w9ewW0btZe /tmp/tmp.nM8lqbv8dG 00:36:12.227 00:36:12.227 real 1m26.495s 00:36:12.227 user 2m17.332s 00:36:12.227 sys 0m28.108s 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:12.227 01:02:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:12.227 ************************************ 00:36:12.227 END TEST nvmf_tls 00:36:12.227 ************************************ 00:36:12.227 01:02:15 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:36:12.227 01:02:15 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:12.227 01:02:15 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:12.227 01:02:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:12.227 ************************************ 00:36:12.227 START TEST nvmf_fips 00:36:12.227 ************************************ 00:36:12.227 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:36:12.486 * Looking for test storage... 00:36:12.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:36:12.486 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:36:12.487 01:02:15 
nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:36:12.487 Error setting digest 00:36:12.487 00E2C3AA5A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:36:12.487 00E2C3AA5A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:36:12.487 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:12.488 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:12.746 Cannot find device "nvmf_tgt_br" 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:12.746 Cannot find device "nvmf_tgt_br2" 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:12.746 Cannot find device "nvmf_tgt_br" 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:12.746 Cannot find device "nvmf_tgt_br2" 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:12.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:12.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:12.746 01:02:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:36:12.746 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:12.746 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:13.004 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:13.004 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:13.004 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:13.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:36:13.004 00:36:13.004 --- 10.0.0.2 ping statistics --- 00:36:13.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.004 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:36:13.004 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:13.004 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:36:13.004 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:36:13.004 00:36:13.004 --- 10.0.0.3 ping statistics --- 00:36:13.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.004 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:36:13.004 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:13.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:13.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:36:13.004 00:36:13.004 --- 10.0.0.1 ping statistics --- 00:36:13.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.004 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:36:13.004 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.004 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=101023 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 101023 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 101023 ']' 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:13.005 01:02:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:13.005 [2024-05-15 01:02:16.176419] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:36:13.005 [2024-05-15 01:02:16.176529] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.262 [2024-05-15 01:02:16.311511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.262 [2024-05-15 01:02:16.405623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.262 [2024-05-15 01:02:16.405693] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.262 [2024-05-15 01:02:16.405720] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.262 [2024-05-15 01:02:16.405729] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.262 [2024-05-15 01:02:16.405736] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:13.262 [2024-05-15 01:02:16.405766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:36:14.195 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:36:14.196 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:14.196 [2024-05-15 01:02:17.463681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.196 [2024-05-15 01:02:17.479627] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:14.196 [2024-05-15 01:02:17.479724] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:14.196 [2024-05-15 01:02:17.479931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.454 [2024-05-15 01:02:17.510330] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:14.454 malloc0 00:36:14.454 01:02:17 
nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:14.454 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=101081 00:36:14.454 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:36:14.454 01:02:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 101081 /var/tmp/bdevperf.sock 00:36:14.454 01:02:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 101081 ']' 00:36:14.454 01:02:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:14.454 01:02:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:14.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:14.454 01:02:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:14.454 01:02:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:14.454 01:02:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:14.454 [2024-05-15 01:02:17.619616] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:14.454 [2024-05-15 01:02:17.619723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101081 ] 00:36:14.713 [2024-05-15 01:02:17.763355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.713 [2024-05-15 01:02:17.867546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:15.280 01:02:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:15.280 01:02:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:36:15.280 01:02:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:36:15.538 [2024-05-15 01:02:18.773132] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:15.538 [2024-05-15 01:02:18.773670] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:36:15.796 TLSTESTn1 00:36:15.796 01:02:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:15.796 Running I/O for 10 seconds... 
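The bdevperf run started above is the actual TLS exercise: bdevperf is launched idle (-z) on its own RPC socket, a controller is attached to the TLS-enabled listener with a pre-shared key, and perform_tests kicks off the 10-second verify workload whose results follow. A condensed sketch of that initiator-side sequence, using only the binaries, RPCs, and paths that appear in this trace (paths and the PSK file are environment-specific; the harness also waits for the RPC socket between steps, which is omitted here):

# Start bdevperf idle on a private RPC socket (queue depth 128, 4 KiB verify I/O, 10 s)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# Attach an NVMe/TCP controller to the TLS listener, supplying the PSK
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

# Run the configured workload against the attached bdev
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests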
00:36:25.771 00:36:25.771 Latency(us) 00:36:25.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.771 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:36:25.771 Verification LBA range: start 0x0 length 0x2000 00:36:25.771 TLSTESTn1 : 10.03 3891.92 15.20 0.00 0.00 32822.57 7864.32 21567.30 00:36:25.771 =================================================================================================================== 00:36:25.771 Total : 3891.92 15.20 0.00 0.00 32822.57 7864.32 21567.30 00:36:25.771 0 00:36:25.771 01:02:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:36:25.771 01:02:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:36:25.771 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:36:25.771 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:36:25.771 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:36:25.771 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:25.771 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:36:25.771 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:36:25.771 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:36:25.771 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:25.771 nvmf_trace.0 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 101081 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 101081 ']' 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 101081 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 101081 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:36:26.030 killing process with pid 101081 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 101081' 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 101081 00:36:26.030 Received shutdown signal, test time was about 10.000000 seconds 00:36:26.030 00:36:26.030 Latency(us) 00:36:26.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:26.030 =================================================================================================================== 00:36:26.030 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:26.030 [2024-05-15 01:02:29.126266] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:36:26.030 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 101081 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:26.289 rmmod nvme_tcp 00:36:26.289 rmmod nvme_fabrics 00:36:26.289 rmmod nvme_keyring 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 101023 ']' 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 101023 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 101023 ']' 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 101023 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 101023 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:26.289 killing process with pid 101023 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 101023' 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 101023 00:36:26.289 [2024-05-15 01:02:29.464360] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:26.289 [2024-05-15 01:02:29.464402] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:26.289 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 101023 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:36:26.551 00:36:26.551 real 0m14.263s 00:36:26.551 user 0m19.108s 00:36:26.551 sys 0m5.887s 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # 
xtrace_disable 00:36:26.551 ************************************ 00:36:26.551 END TEST nvmf_fips 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:26.551 ************************************ 00:36:26.551 01:02:29 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:36:26.551 01:02:29 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:36:26.551 01:02:29 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:26.551 01:02:29 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:26.551 01:02:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.551 ************************************ 00:36:26.551 START TEST nvmf_fuzz 00:36:26.551 ************************************ 00:36:26.551 01:02:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:36:26.819 * Looking for test storage... 00:36:26.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:26.819 01:02:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:26.819 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:36:26.819 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:26.819 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:26.820 01:02:29 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:26.820 Cannot find device "nvmf_tgt_br" 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:26.820 Cannot find device "nvmf_tgt_br2" 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:26.820 Cannot find device "nvmf_tgt_br" 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:26.820 Cannot find device "nvmf_tgt_br2" 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:36:26.820 01:02:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:26.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:26.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:26.820 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:26.821 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:26.821 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:27.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:27.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:36:27.080 00:36:27.080 --- 10.0.0.2 ping statistics --- 00:36:27.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.080 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:27.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:27.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:36:27.080 00:36:27.080 --- 10.0.0.3 ping statistics --- 00:36:27.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.080 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:36:27.080 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:27.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:27.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:36:27.080 00:36:27.080 --- 10.0.0.1 ping statistics --- 00:36:27.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.081 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=101423 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 101423 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@828 -- # '[' -z 101423 ']' 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:27.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
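As in the FIPS test, the fuzz target is launched inside the namespace and the harness blocks until the application's RPC socket is listening. A rough equivalent of that start-up step; the polling loop below is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation:

# Launch nvmf_tgt in the target namespace: shm id 0, all trace groups, core mask 0x1
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# Simplified wait: poll until the UNIX-domain RPC socket exists
while [ ! -S /var/tmp/spdk.sock ]; do
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"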
00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:27.081 01:02:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@861 -- # return 0 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:28.459 Malloc0 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:36:28.459 01:02:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:36:28.718 Shutting down the fuzz application 00:36:28.718 01:02:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:36:28.976 Shutting down the fuzz application 00:36:28.976 01:02:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:28.976 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.976 01:02:32 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:36:28.976 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.976 01:02:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:28.976 01:02:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:36:28.976 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:28.976 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:28.977 rmmod nvme_tcp 00:36:28.977 rmmod nvme_fabrics 00:36:28.977 rmmod nvme_keyring 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 101423 ']' 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 101423 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@947 -- # '[' -z 101423 ']' 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # kill -0 101423 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # uname 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 101423 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:28.977 killing process with pid 101423 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 101423' 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # kill 101423 00:36:28.977 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@971 -- # wait 101423 00:36:29.235 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:29.235 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:29.235 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:29.235 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:29.235 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:29.235 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.235 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:29.235 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.495 01:02:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:36:29.495 01:02:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:36:29.495 00:36:29.495 real 0m2.766s 00:36:29.495 user 0m2.972s 00:36:29.495 sys 0m0.648s 00:36:29.495 01:02:32 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:29.495 ************************************ 00:36:29.495 END TEST nvmf_fuzz 00:36:29.495 ************************************ 00:36:29.495 01:02:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:29.495 01:02:32 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:36:29.495 01:02:32 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:29.495 01:02:32 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:29.495 01:02:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:29.495 ************************************ 00:36:29.495 START TEST nvmf_multiconnection 00:36:29.495 ************************************ 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:36:29.495 * Looking for test storage... 00:36:29.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:29.495 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:29.495 Cannot find device "nvmf_tgt_br" 00:36:29.496 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:36:29.496 01:02:32 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:29.496 Cannot find device "nvmf_tgt_br2" 00:36:29.496 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:36:29.496 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:29.496 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:29.496 Cannot find device "nvmf_tgt_br" 00:36:29.496 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:36:29.496 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:29.496 Cannot find device "nvmf_tgt_br2" 00:36:29.496 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:36:29.496 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:29.754 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:29.754 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:29.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:29.754 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:36:29.754 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:29.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:29.755 01:02:32 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:29.755 01:02:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:29.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:29.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:36:29.755 00:36:29.755 --- 10.0.0.2 ping statistics --- 00:36:29.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.755 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:29.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:29.755 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:36:29.755 00:36:29.755 --- 10.0.0.3 ping statistics --- 00:36:29.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.755 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:29.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:29.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:36:29.755 00:36:29.755 --- 10.0.0.1 ping statistics --- 00:36:29.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.755 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=101640 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 101640 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@828 -- # '[' -z 101640 ']' 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:29.755 01:02:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:30.013 [2024-05-15 01:02:33.099557] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:30.013 [2024-05-15 01:02:33.099677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.013 [2024-05-15 01:02:33.247660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:30.273 [2024-05-15 01:02:33.346573] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:30.273 [2024-05-15 01:02:33.346639] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.273 [2024-05-15 01:02:33.346654] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.273 [2024-05-15 01:02:33.346665] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.273 [2024-05-15 01:02:33.346674] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:30.273 [2024-05-15 01:02:33.346854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.273 [2024-05-15 01:02:33.347151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.273 [2024-05-15 01:02:33.347859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.273 [2024-05-15 01:02:33.347869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.840 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:30.840 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@861 -- # return 0 00:36:30.840 01:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:30.840 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:30.840 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.099 [2024-05-15 01:02:34.151808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.099 Malloc1 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.099 [2024-05-15 01:02:34.236935] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:31.099 [2024-05-15 01:02:34.237172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.099 Malloc2 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.099 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.100 Malloc3 
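The trace keeps repeating the same four-RPC pattern below for each of the 11 subsystems (NVMF_SUBSYS=11). Condensed into a sketch, one pass of the setup loop in target/multiconnection.sh amounts to the following; rpc_cmd is the harness's wrapper for issuing SPDK RPCs (scripts/rpc.py), and the sizes, NQNs and listen address are exactly the ones traced above:

for i in $(seq 1 11); do
  rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"          # 64 MB malloc bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
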
00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.100 Malloc4 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.100 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.359 Malloc5 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.359 Malloc6 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.359 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 Malloc7 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 Malloc8 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:36:31.360 
01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 Malloc9 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.360 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.619 Malloc10 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.619 Malloc11 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.619 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:31.878 01:02:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:36:31.878 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:31.878 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:31.878 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:36:31.878 01:02:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:33.779 01:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:33.779 01:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:36:33.779 01:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK1 00:36:33.779 01:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:33.779 01:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:36:33.779 01:02:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:33.779 01:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:33.779 01:02:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:36:34.037 01:02:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:36:34.037 01:02:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:34.037 01:02:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:34.037 01:02:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:36:34.037 01:02:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:35.939 01:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:35.939 01:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:36:35.939 01:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK2 00:36:35.939 01:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:35.939 01:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:36:35.939 01:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:35.939 01:02:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:35.939 01:02:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:36:36.197 01:02:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:36:36.197 01:02:39 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:36.197 01:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:36.197 01:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:36:36.197 01:02:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:38.137 01:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:38.137 01:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:36:38.137 01:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK3 00:36:38.137 01:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:38.137 01:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:36:38.137 01:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:38.137 01:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:38.137 01:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:36:38.396 01:02:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:36:38.396 01:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:38.396 01:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:38.396 01:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:36:38.396 01:02:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:40.313 01:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:40.313 01:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:36:40.313 01:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK4 00:36:40.313 01:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:40.313 01:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:36:40.313 01:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:40.313 01:02:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:40.313 01:02:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:36:40.584 01:02:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:36:40.584 01:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:40.584 01:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:40.584 01:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 
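From the initiator side the trace then cycles through the same connect-and-wait sequence for cnode1 through cnode11 (it continues below for the remaining subsystems). A simplified sketch of that loop, reusing the NVME_HOSTNQN/NVME_HOSTID values exported by nvmf/common.sh above; the harness's waitforserial additionally caps the polling at 15 retries:

for i in $(seq 1 11); do
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
       -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
  # waitforserial: poll until a block device with serial SPDK$i shows up in lsblk
  while [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -lt 1 ]; do
    sleep 2
  done
done
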
00:36:40.584 01:02:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:42.490 01:02:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:42.490 01:02:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK5 00:36:42.490 01:02:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:36:42.490 01:02:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:42.490 01:02:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:36:42.490 01:02:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:42.490 01:02:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:42.490 01:02:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:36:42.747 01:02:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:36:42.747 01:02:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:42.747 01:02:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:42.747 01:02:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:36:42.747 01:02:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:44.647 01:02:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:44.647 01:02:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:36:44.647 01:02:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK6 00:36:44.647 01:02:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:44.647 01:02:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:36:44.647 01:02:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:44.647 01:02:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:44.647 01:02:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:36:44.904 01:02:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:36:44.904 01:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:44.904 01:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:44.904 01:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:36:44.904 01:02:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 
00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK7 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:36:47.430 01:02:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK8 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:36:49.346 01:02:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:51.276 01:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:51.276 01:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:36:51.276 01:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK9 00:36:51.276 01:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:51.276 01:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == 
nvme_device_counter )) 00:36:51.276 01:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:51.277 01:02:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:51.277 01:02:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:36:51.535 01:02:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:36:51.535 01:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:51.535 01:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:51.535 01:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:36:51.535 01:02:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:53.446 01:02:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:53.446 01:02:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:36:53.446 01:02:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK10 00:36:53.446 01:02:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:53.446 01:02:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:36:53.446 01:02:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:53.446 01:02:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:53.446 01:02:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:36:53.704 01:02:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:36:53.704 01:02:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:36:53.704 01:02:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:36:53.704 01:02:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:36:53.704 01:02:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:36:55.607 01:02:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:36:55.866 01:02:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:36:55.866 01:02:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK11 00:36:55.866 01:02:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:36:55.866 01:02:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:36:55.866 01:02:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:36:55.866 01:02:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:36:55.866 
[global] 00:36:55.866 thread=1 00:36:55.866 invalidate=1 00:36:55.866 rw=read 00:36:55.866 time_based=1 00:36:55.866 runtime=10 00:36:55.866 ioengine=libaio 00:36:55.866 direct=1 00:36:55.866 bs=262144 00:36:55.866 iodepth=64 00:36:55.866 norandommap=1 00:36:55.866 numjobs=1 00:36:55.866 00:36:55.866 [job0] 00:36:55.866 filename=/dev/nvme0n1 00:36:55.866 [job1] 00:36:55.866 filename=/dev/nvme10n1 00:36:55.866 [job2] 00:36:55.866 filename=/dev/nvme1n1 00:36:55.866 [job3] 00:36:55.866 filename=/dev/nvme2n1 00:36:55.866 [job4] 00:36:55.866 filename=/dev/nvme3n1 00:36:55.866 [job5] 00:36:55.866 filename=/dev/nvme4n1 00:36:55.866 [job6] 00:36:55.866 filename=/dev/nvme5n1 00:36:55.866 [job7] 00:36:55.866 filename=/dev/nvme6n1 00:36:55.866 [job8] 00:36:55.866 filename=/dev/nvme7n1 00:36:55.866 [job9] 00:36:55.866 filename=/dev/nvme8n1 00:36:55.866 [job10] 00:36:55.866 filename=/dev/nvme9n1 00:36:55.866 Could not set queue depth (nvme0n1) 00:36:55.866 Could not set queue depth (nvme10n1) 00:36:55.866 Could not set queue depth (nvme1n1) 00:36:55.866 Could not set queue depth (nvme2n1) 00:36:55.866 Could not set queue depth (nvme3n1) 00:36:55.866 Could not set queue depth (nvme4n1) 00:36:55.866 Could not set queue depth (nvme5n1) 00:36:55.866 Could not set queue depth (nvme6n1) 00:36:55.866 Could not set queue depth (nvme7n1) 00:36:55.866 Could not set queue depth (nvme8n1) 00:36:55.866 Could not set queue depth (nvme9n1) 00:36:56.124 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.124 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.124 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.124 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.124 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.124 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.124 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.124 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.125 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.125 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.125 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:56.125 fio-3.35 00:36:56.125 Starting 11 threads 00:37:08.327 00:37:08.327 job0: (groupid=0, jobs=1): err= 0: pid=102113: Wed May 15 01:03:09 2024 00:37:08.327 read: IOPS=565, BW=141MiB/s (148MB/s)(1430MiB/10115msec) 00:37:08.327 slat (usec): min=17, max=80798, avg=1728.68, stdev=6343.59 00:37:08.327 clat (msec): min=23, max=263, avg=111.28, stdev=20.74 00:37:08.327 lat (msec): min=23, max=282, avg=113.01, stdev=21.74 00:37:08.327 clat percentiles (msec): 00:37:08.327 | 1.00th=[ 53], 5.00th=[ 82], 10.00th=[ 87], 20.00th=[ 95], 00:37:08.327 | 30.00th=[ 104], 40.00th=[ 110], 50.00th=[ 114], 60.00th=[ 116], 00:37:08.327 | 70.00th=[ 121], 80.00th=[ 125], 90.00th=[ 131], 95.00th=[ 138], 00:37:08.327 | 99.00th=[ 176], 99.50th=[ 205], 99.90th=[ 264], 
99.95th=[ 264], 00:37:08.327 | 99.99th=[ 264] 00:37:08.327 bw ( KiB/s): min=112415, max=183808, per=7.04%, avg=144856.85, stdev=20369.20, samples=20 00:37:08.327 iops : min= 439, max= 718, avg=565.65, stdev=79.53, samples=20 00:37:08.327 lat (msec) : 50=0.96%, 100=23.83%, 250=75.09%, 500=0.12% 00:37:08.327 cpu : usr=0.14%, sys=1.78%, ctx=1095, majf=0, minf=4097 00:37:08.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:37:08.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:08.327 issued rwts: total=5720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.327 job1: (groupid=0, jobs=1): err= 0: pid=102114: Wed May 15 01:03:09 2024 00:37:08.327 read: IOPS=582, BW=146MiB/s (153MB/s)(1474MiB/10124msec) 00:37:08.327 slat (usec): min=17, max=54440, avg=1668.16, stdev=5369.46 00:37:08.327 clat (msec): min=20, max=281, avg=108.02, stdev=26.99 00:37:08.327 lat (msec): min=21, max=281, avg=109.69, stdev=27.74 00:37:08.327 clat percentiles (msec): 00:37:08.327 | 1.00th=[ 40], 5.00th=[ 58], 10.00th=[ 68], 20.00th=[ 88], 00:37:08.327 | 30.00th=[ 96], 40.00th=[ 109], 50.00th=[ 114], 60.00th=[ 117], 00:37:08.327 | 70.00th=[ 122], 80.00th=[ 127], 90.00th=[ 136], 95.00th=[ 146], 00:37:08.327 | 99.00th=[ 165], 99.50th=[ 186], 99.90th=[ 284], 99.95th=[ 284], 00:37:08.327 | 99.99th=[ 284] 00:37:08.327 bw ( KiB/s): min=108327, max=274944, per=7.25%, avg=149240.30, stdev=36291.09, samples=20 00:37:08.327 iops : min= 423, max= 1074, avg=582.80, stdev=141.74, samples=20 00:37:08.327 lat (msec) : 50=2.34%, 100=30.17%, 250=67.37%, 500=0.12% 00:37:08.327 cpu : usr=0.26%, sys=1.99%, ctx=1210, majf=0, minf=4097 00:37:08.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:37:08.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:08.327 issued rwts: total=5896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.327 job2: (groupid=0, jobs=1): err= 0: pid=102115: Wed May 15 01:03:09 2024 00:37:08.327 read: IOPS=702, BW=176MiB/s (184MB/s)(1779MiB/10122msec) 00:37:08.327 slat (usec): min=16, max=74082, avg=1378.30, stdev=5426.98 00:37:08.327 clat (msec): min=19, max=263, avg=89.50, stdev=26.06 00:37:08.327 lat (msec): min=19, max=263, avg=90.88, stdev=26.81 00:37:08.327 clat percentiles (msec): 00:37:08.327 | 1.00th=[ 47], 5.00th=[ 54], 10.00th=[ 59], 20.00th=[ 65], 00:37:08.327 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 93], 00:37:08.327 | 70.00th=[ 100], 80.00th=[ 112], 90.00th=[ 125], 95.00th=[ 138], 00:37:08.327 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 245], 99.95th=[ 245], 00:37:08.327 | 99.99th=[ 264] 00:37:08.327 bw ( KiB/s): min=116502, max=276008, per=8.77%, avg=180508.80, stdev=45537.19, samples=20 00:37:08.327 iops : min= 455, max= 1078, avg=704.90, stdev=177.89, samples=20 00:37:08.327 lat (msec) : 20=0.03%, 50=2.14%, 100=68.51%, 250=29.28%, 500=0.04% 00:37:08.327 cpu : usr=0.19%, sys=2.22%, ctx=1277, majf=0, minf=4097 00:37:08.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:37:08.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:37:08.327 issued rwts: total=7114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.327 job3: (groupid=0, jobs=1): err= 0: pid=102116: Wed May 15 01:03:09 2024 00:37:08.327 read: IOPS=669, BW=167MiB/s (176MB/s)(1701MiB/10154msec) 00:37:08.327 slat (usec): min=17, max=61065, avg=1455.72, stdev=5100.42 00:37:08.327 clat (msec): min=9, max=294, avg=93.95, stdev=21.82 00:37:08.327 lat (msec): min=10, max=294, avg=95.40, stdev=22.37 00:37:08.327 clat percentiles (msec): 00:37:08.327 | 1.00th=[ 66], 5.00th=[ 74], 10.00th=[ 78], 20.00th=[ 83], 00:37:08.327 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 93], 00:37:08.327 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 117], 95.00th=[ 125], 00:37:08.327 | 99.00th=[ 153], 99.50th=[ 271], 99.90th=[ 292], 99.95th=[ 296], 00:37:08.327 | 99.99th=[ 296] 00:37:08.327 bw ( KiB/s): min=112415, max=205723, per=8.38%, avg=172422.30, stdev=22830.84, samples=20 00:37:08.327 iops : min= 439, max= 803, avg=673.25, stdev=89.12, samples=20 00:37:08.327 lat (msec) : 10=0.01%, 20=0.24%, 50=0.40%, 100=77.23%, 250=21.57% 00:37:08.327 lat (msec) : 500=0.56% 00:37:08.327 cpu : usr=0.29%, sys=2.39%, ctx=1315, majf=0, minf=4097 00:37:08.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:37:08.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:08.327 issued rwts: total=6802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.327 job4: (groupid=0, jobs=1): err= 0: pid=102117: Wed May 15 01:03:09 2024 00:37:08.327 read: IOPS=869, BW=217MiB/s (228MB/s)(2207MiB/10151msec) 00:37:08.327 slat (usec): min=17, max=94173, avg=1110.21, stdev=4372.58 00:37:08.327 clat (usec): min=1251, max=308427, avg=72351.30, stdev=36424.93 00:37:08.327 lat (usec): min=1290, max=308456, avg=73461.52, stdev=37037.42 00:37:08.327 clat percentiles (msec): 00:37:08.327 | 1.00th=[ 8], 5.00th=[ 22], 10.00th=[ 29], 20.00th=[ 43], 00:37:08.327 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 84], 00:37:08.328 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 124], 00:37:08.328 | 99.00th=[ 213], 99.50th=[ 243], 99.90th=[ 275], 99.95th=[ 296], 00:37:08.328 | 99.99th=[ 309] 00:37:08.328 bw ( KiB/s): min=102706, max=505843, per=10.91%, avg=224538.10, stdev=102387.51, samples=20 00:37:08.328 iops : min= 401, max= 1975, avg=876.85, stdev=399.75, samples=20 00:37:08.328 lat (msec) : 2=0.06%, 4=0.29%, 10=0.74%, 20=2.77%, 50=20.34% 00:37:08.328 lat (msec) : 100=60.81%, 250=14.71%, 500=0.27% 00:37:08.328 cpu : usr=0.32%, sys=2.75%, ctx=1794, majf=0, minf=4097 00:37:08.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:37:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:08.328 issued rwts: total=8829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.328 job5: (groupid=0, jobs=1): err= 0: pid=102118: Wed May 15 01:03:09 2024 00:37:08.328 read: IOPS=1132, BW=283MiB/s (297MB/s)(2865MiB/10116msec) 00:37:08.328 slat (usec): min=16, max=78569, avg=867.99, stdev=4056.60 00:37:08.328 clat (msec): min=13, max=272, avg=55.53, stdev=40.83 00:37:08.328 lat (msec): min=13, max=273, avg=56.40, stdev=41.57 00:37:08.328 
clat percentiles (msec): 00:37:08.328 | 1.00th=[ 18], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 27], 00:37:08.328 | 30.00th=[ 29], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 39], 00:37:08.328 | 70.00th=[ 57], 80.00th=[ 111], 90.00th=[ 122], 95.00th=[ 132], 00:37:08.328 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 251], 99.95th=[ 251], 00:37:08.328 | 99.99th=[ 275] 00:37:08.328 bw ( KiB/s): min=113948, max=558592, per=14.17%, avg=291778.15, stdev=190991.47, samples=20 00:37:08.328 iops : min= 445, max= 2182, avg=1139.60, stdev=746.05, samples=20 00:37:08.328 lat (msec) : 20=3.72%, 50=62.99%, 100=8.91%, 250=24.23%, 500=0.15% 00:37:08.328 cpu : usr=0.46%, sys=2.98%, ctx=2105, majf=0, minf=4097 00:37:08.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:08.328 issued rwts: total=11459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.328 job6: (groupid=0, jobs=1): err= 0: pid=102119: Wed May 15 01:03:09 2024 00:37:08.328 read: IOPS=639, BW=160MiB/s (168MB/s)(1623MiB/10150msec) 00:37:08.328 slat (usec): min=17, max=74109, avg=1502.92, stdev=5101.73 00:37:08.328 clat (msec): min=28, max=306, avg=98.33, stdev=23.78 00:37:08.328 lat (msec): min=29, max=306, avg=99.84, stdev=24.37 00:37:08.328 clat percentiles (msec): 00:37:08.328 | 1.00th=[ 67], 5.00th=[ 77], 10.00th=[ 81], 20.00th=[ 85], 00:37:08.328 | 30.00th=[ 87], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 96], 00:37:08.328 | 70.00th=[ 103], 80.00th=[ 114], 90.00th=[ 122], 95.00th=[ 130], 00:37:08.328 | 99.00th=[ 155], 99.50th=[ 257], 99.90th=[ 305], 99.95th=[ 309], 00:37:08.328 | 99.99th=[ 309] 00:37:08.328 bw ( KiB/s): min=107305, max=197120, per=8.00%, avg=164607.10, stdev=25340.99, samples=20 00:37:08.328 iops : min= 419, max= 770, avg=642.65, stdev=99.03, samples=20 00:37:08.328 lat (msec) : 50=0.59%, 100=67.47%, 250=31.19%, 500=0.75% 00:37:08.328 cpu : usr=0.26%, sys=1.97%, ctx=1321, majf=0, minf=4097 00:37:08.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:37:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:08.328 issued rwts: total=6493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.328 job7: (groupid=0, jobs=1): err= 0: pid=102120: Wed May 15 01:03:09 2024 00:37:08.328 read: IOPS=671, BW=168MiB/s (176MB/s)(1706MiB/10153msec) 00:37:08.328 slat (usec): min=17, max=52552, avg=1438.47, stdev=4924.90 00:37:08.328 clat (msec): min=23, max=294, avg=93.62, stdev=22.57 00:37:08.328 lat (msec): min=24, max=294, avg=95.06, stdev=23.18 00:37:08.328 clat percentiles (msec): 00:37:08.328 | 1.00th=[ 50], 5.00th=[ 62], 10.00th=[ 74], 20.00th=[ 82], 00:37:08.328 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 94], 00:37:08.328 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 121], 95.00th=[ 128], 00:37:08.328 | 99.00th=[ 150], 99.50th=[ 224], 99.90th=[ 292], 99.95th=[ 292], 00:37:08.328 | 99.99th=[ 296] 00:37:08.328 bw ( KiB/s): min=118035, max=237056, per=8.41%, avg=173015.95, stdev=25471.05, samples=20 00:37:08.328 iops : min= 461, max= 926, avg=675.60, stdev=99.54, samples=20 00:37:08.328 lat (msec) : 50=1.11%, 100=73.86%, 250=24.71%, 500=0.31% 00:37:08.328 cpu : 
usr=0.27%, sys=2.11%, ctx=1379, majf=0, minf=4097 00:37:08.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:37:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:08.328 issued rwts: total=6822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.328 job8: (groupid=0, jobs=1): err= 0: pid=102121: Wed May 15 01:03:09 2024 00:37:08.328 read: IOPS=913, BW=228MiB/s (240MB/s)(2288MiB/10017msec) 00:37:08.328 slat (usec): min=16, max=79206, avg=1051.09, stdev=3927.11 00:37:08.328 clat (usec): min=1071, max=152361, avg=68889.57, stdev=29974.53 00:37:08.328 lat (usec): min=1844, max=192688, avg=69940.65, stdev=30588.83 00:37:08.328 clat percentiles (msec): 00:37:08.328 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 26], 20.00th=[ 35], 00:37:08.328 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 81], 00:37:08.328 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:37:08.328 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 146], 00:37:08.328 | 99.99th=[ 153] 00:37:08.328 bw ( KiB/s): min=129024, max=579960, per=11.31%, avg=232716.50, stdev=104650.99, samples=20 00:37:08.328 iops : min= 504, max= 2265, avg=908.80, stdev=408.71, samples=20 00:37:08.328 lat (msec) : 2=0.03%, 4=0.15%, 10=0.95%, 20=2.73%, 50=20.59% 00:37:08.328 lat (msec) : 100=59.51%, 250=16.04% 00:37:08.328 cpu : usr=0.38%, sys=2.45%, ctx=1998, majf=0, minf=4097 00:37:08.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:37:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:08.328 issued rwts: total=9152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.328 job9: (groupid=0, jobs=1): err= 0: pid=102122: Wed May 15 01:03:09 2024 00:37:08.328 read: IOPS=760, BW=190MiB/s (199MB/s)(1923MiB/10121msec) 00:37:08.328 slat (usec): min=18, max=107980, avg=1282.80, stdev=5289.76 00:37:08.328 clat (msec): min=13, max=248, avg=82.76, stdev=38.21 00:37:08.328 lat (msec): min=13, max=329, avg=84.04, stdev=39.01 00:37:08.328 clat percentiles (msec): 00:37:08.328 | 1.00th=[ 19], 5.00th=[ 26], 10.00th=[ 31], 20.00th=[ 41], 00:37:08.328 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 87], 60.00th=[ 96], 00:37:08.328 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 134], 00:37:08.328 | 99.00th=[ 155], 99.50th=[ 226], 99.90th=[ 230], 99.95th=[ 249], 00:37:08.328 | 99.99th=[ 249] 00:37:08.328 bw ( KiB/s): min=95552, max=540216, per=9.49%, avg=195380.45, stdev=103005.98, samples=20 00:37:08.328 iops : min= 373, max= 2110, avg=762.90, stdev=402.26, samples=20 00:37:08.328 lat (msec) : 20=1.22%, 50=20.84%, 100=40.96%, 250=36.98% 00:37:08.328 cpu : usr=0.23%, sys=2.29%, ctx=1470, majf=0, minf=4097 00:37:08.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:08.328 issued rwts: total=7693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.328 job10: (groupid=0, jobs=1): err= 0: pid=102123: Wed May 15 01:03:09 2024 00:37:08.328 read: IOPS=557, 
BW=139MiB/s (146MB/s)(1416MiB/10154msec) 00:37:08.328 slat (usec): min=15, max=59395, avg=1721.36, stdev=5661.83 00:37:08.328 clat (msec): min=22, max=302, avg=112.75, stdev=26.56 00:37:08.328 lat (msec): min=23, max=302, avg=114.47, stdev=27.34 00:37:08.328 clat percentiles (msec): 00:37:08.328 | 1.00th=[ 42], 5.00th=[ 58], 10.00th=[ 78], 20.00th=[ 101], 00:37:08.328 | 30.00th=[ 110], 40.00th=[ 114], 50.00th=[ 118], 60.00th=[ 122], 00:37:08.328 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 140], 00:37:08.328 | 99.00th=[ 171], 99.50th=[ 251], 99.90th=[ 271], 99.95th=[ 292], 00:37:08.328 | 99.99th=[ 305] 00:37:08.328 bw ( KiB/s): min=113437, max=261109, per=6.96%, avg=143335.10, stdev=30167.33, samples=20 00:37:08.328 iops : min= 443, max= 1019, avg=559.55, stdev=117.74, samples=20 00:37:08.328 lat (msec) : 50=2.22%, 100=17.78%, 250=79.49%, 500=0.51% 00:37:08.328 cpu : usr=0.17%, sys=1.87%, ctx=1215, majf=0, minf=4097 00:37:08.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:37:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:08.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:08.328 issued rwts: total=5665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:08.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:08.328 00:37:08.328 Run status group 0 (all jobs): 00:37:08.328 READ: bw=2010MiB/s (2108MB/s), 139MiB/s-283MiB/s (146MB/s-297MB/s), io=19.9GiB (21.4GB), run=10017-10154msec 00:37:08.328 00:37:08.328 Disk stats (read/write): 00:37:08.328 nvme0n1: ios=11329/0, merge=0/0, ticks=1234862/0, in_queue=1234862, util=97.19% 00:37:08.328 nvme10n1: ios=11664/0, merge=0/0, ticks=1237634/0, in_queue=1237634, util=97.69% 00:37:08.328 nvme1n1: ios=14123/0, merge=0/0, ticks=1241680/0, in_queue=1241680, util=97.66% 00:37:08.328 nvme2n1: ios=13477/0, merge=0/0, ticks=1233650/0, in_queue=1233650, util=98.12% 00:37:08.328 nvme3n1: ios=17545/0, merge=0/0, ticks=1233304/0, in_queue=1233304, util=97.84% 00:37:08.328 nvme4n1: ios=22790/0, merge=0/0, ticks=1224730/0, in_queue=1224730, util=97.80% 00:37:08.328 nvme5n1: ios=12898/0, merge=0/0, ticks=1235471/0, in_queue=1235471, util=97.94% 00:37:08.328 nvme6n1: ios=13525/0, merge=0/0, ticks=1235152/0, in_queue=1235152, util=98.29% 00:37:08.328 nvme7n1: ios=18193/0, merge=0/0, ticks=1239014/0, in_queue=1239014, util=97.98% 00:37:08.328 nvme8n1: ios=15261/0, merge=0/0, ticks=1227444/0, in_queue=1227444, util=98.37% 00:37:08.328 nvme9n1: ios=11206/0, merge=0/0, ticks=1235089/0, in_queue=1235089, util=98.66% 00:37:08.329 01:03:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:37:08.329 [global] 00:37:08.329 thread=1 00:37:08.329 invalidate=1 00:37:08.329 rw=randwrite 00:37:08.329 time_based=1 00:37:08.329 runtime=10 00:37:08.329 ioengine=libaio 00:37:08.329 direct=1 00:37:08.329 bs=262144 00:37:08.329 iodepth=64 00:37:08.329 norandommap=1 00:37:08.329 numjobs=1 00:37:08.329 00:37:08.329 [job0] 00:37:08.329 filename=/dev/nvme0n1 00:37:08.329 [job1] 00:37:08.329 filename=/dev/nvme10n1 00:37:08.329 [job2] 00:37:08.329 filename=/dev/nvme1n1 00:37:08.329 [job3] 00:37:08.329 filename=/dev/nvme2n1 00:37:08.329 [job4] 00:37:08.329 filename=/dev/nvme3n1 00:37:08.329 [job5] 00:37:08.329 filename=/dev/nvme4n1 00:37:08.329 [job6] 00:37:08.329 filename=/dev/nvme5n1 00:37:08.329 [job7] 00:37:08.329 filename=/dev/nvme6n1 00:37:08.329 [job8] 
00:37:08.329 filename=/dev/nvme7n1 00:37:08.329 [job9] 00:37:08.329 filename=/dev/nvme8n1 00:37:08.329 [job10] 00:37:08.329 filename=/dev/nvme9n1 00:37:08.329 Could not set queue depth (nvme0n1) 00:37:08.329 Could not set queue depth (nvme10n1) 00:37:08.329 Could not set queue depth (nvme1n1) 00:37:08.329 Could not set queue depth (nvme2n1) 00:37:08.329 Could not set queue depth (nvme3n1) 00:37:08.329 Could not set queue depth (nvme4n1) 00:37:08.329 Could not set queue depth (nvme5n1) 00:37:08.329 Could not set queue depth (nvme6n1) 00:37:08.329 Could not set queue depth (nvme7n1) 00:37:08.329 Could not set queue depth (nvme8n1) 00:37:08.329 Could not set queue depth (nvme9n1) 00:37:08.329 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:08.329 fio-3.35 00:37:08.329 Starting 11 threads 00:37:18.299 00:37:18.299 job0: (groupid=0, jobs=1): err= 0: pid=102321: Wed May 15 01:03:20 2024 00:37:18.299 write: IOPS=316, BW=79.1MiB/s (82.9MB/s)(805MiB/10173msec); 0 zone resets 00:37:18.299 slat (usec): min=24, max=82178, avg=3101.06, stdev=5821.17 00:37:18.299 clat (msec): min=3, max=374, avg=199.06, stdev=28.78 00:37:18.299 lat (msec): min=3, max=374, avg=202.16, stdev=28.58 00:37:18.299 clat percentiles (msec): 00:37:18.299 | 1.00th=[ 70], 5.00th=[ 169], 10.00th=[ 182], 20.00th=[ 190], 00:37:18.299 | 30.00th=[ 197], 40.00th=[ 199], 50.00th=[ 203], 60.00th=[ 205], 00:37:18.299 | 70.00th=[ 209], 80.00th=[ 213], 90.00th=[ 220], 95.00th=[ 224], 00:37:18.299 | 99.00th=[ 275], 99.50th=[ 321], 99.90th=[ 363], 99.95th=[ 376], 00:37:18.299 | 99.99th=[ 376] 00:37:18.299 bw ( KiB/s): min=74388, max=101376, per=7.17%, avg=80817.05, stdev=5495.43, samples=20 00:37:18.299 iops : min= 290, max= 396, avg=315.50, stdev=21.49, samples=20 00:37:18.299 lat (msec) : 4=0.12%, 20=0.03%, 50=0.50%, 100=0.93%, 250=97.17% 00:37:18.299 lat (msec) : 500=1.24% 00:37:18.299 cpu : usr=0.74%, sys=0.94%, ctx=3411, majf=0, minf=1 00:37:18.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:37:18.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.299 issued 
rwts: total=0,3219,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.299 job1: (groupid=0, jobs=1): err= 0: pid=102322: Wed May 15 01:03:20 2024 00:37:18.299 write: IOPS=336, BW=84.1MiB/s (88.2MB/s)(856MiB/10182msec); 0 zone resets 00:37:18.299 slat (usec): min=18, max=57331, avg=2907.74, stdev=5422.73 00:37:18.299 clat (msec): min=3, max=372, avg=187.25, stdev=43.05 00:37:18.299 lat (msec): min=3, max=372, avg=190.16, stdev=43.38 00:37:18.299 clat percentiles (msec): 00:37:18.299 | 1.00th=[ 23], 5.00th=[ 48], 10.00th=[ 174], 20.00th=[ 186], 00:37:18.299 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 201], 00:37:18.299 | 70.00th=[ 203], 80.00th=[ 205], 90.00th=[ 209], 95.00th=[ 213], 00:37:18.299 | 99.00th=[ 259], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 372], 00:37:18.299 | 99.99th=[ 372] 00:37:18.299 bw ( KiB/s): min=79872, max=163187, per=7.63%, avg=86033.15, stdev=18219.54, samples=20 00:37:18.299 iops : min= 312, max= 637, avg=335.85, stdev=71.12, samples=20 00:37:18.299 lat (msec) : 4=0.12%, 10=0.06%, 20=0.47%, 50=5.34%, 100=0.55% 00:37:18.299 lat (msec) : 250=92.35%, 500=1.11% 00:37:18.299 cpu : usr=0.67%, sys=1.12%, ctx=2351, majf=0, minf=1 00:37:18.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:37:18.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.299 issued rwts: total=0,3425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.299 job2: (groupid=0, jobs=1): err= 0: pid=102334: Wed May 15 01:03:20 2024 00:37:18.299 write: IOPS=319, BW=79.9MiB/s (83.8MB/s)(813MiB/10173msec); 0 zone resets 00:37:18.299 slat (usec): min=23, max=60205, avg=3069.12, stdev=5580.73 00:37:18.299 clat (msec): min=14, max=370, avg=196.90, stdev=25.14 00:37:18.299 lat (msec): min=14, max=370, avg=199.97, stdev=24.89 00:37:18.299 clat percentiles (msec): 00:37:18.299 | 1.00th=[ 69], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 188], 00:37:18.299 | 30.00th=[ 194], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 201], 00:37:18.299 | 70.00th=[ 205], 80.00th=[ 207], 90.00th=[ 211], 95.00th=[ 218], 00:37:18.299 | 99.00th=[ 259], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 372], 00:37:18.299 | 99.99th=[ 372] 00:37:18.299 bw ( KiB/s): min=75776, max=87552, per=7.24%, avg=81664.00, stdev=2737.12, samples=20 00:37:18.299 iops : min= 296, max= 342, avg=319.00, stdev=10.69, samples=20 00:37:18.299 lat (msec) : 20=0.09%, 50=0.61%, 100=0.86%, 250=97.39%, 500=1.05% 00:37:18.299 cpu : usr=0.74%, sys=1.06%, ctx=3745, majf=0, minf=1 00:37:18.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:37:18.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.299 issued rwts: total=0,3253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.299 job3: (groupid=0, jobs=1): err= 0: pid=102335: Wed May 15 01:03:20 2024 00:37:18.299 write: IOPS=410, BW=103MiB/s (108MB/s)(1040MiB/10135msec); 0 zone resets 00:37:18.299 slat (usec): min=18, max=36592, avg=2365.38, stdev=4152.52 00:37:18.299 clat (msec): min=8, max=289, avg=153.50, stdev=18.59 00:37:18.299 lat (msec): min=10, max=289, avg=155.86, stdev=18.44 00:37:18.299 clat percentiles (msec): 
00:37:18.299 | 1.00th=[ 52], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 150], 00:37:18.299 | 30.00th=[ 153], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:37:18.299 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:37:18.299 | 99.00th=[ 192], 99.50th=[ 241], 99.90th=[ 279], 99.95th=[ 279], 00:37:18.299 | 99.99th=[ 288] 00:37:18.299 bw ( KiB/s): min=92160, max=126464, per=9.30%, avg=104857.60, stdev=6230.08, samples=20 00:37:18.299 iops : min= 360, max= 494, avg=409.60, stdev=24.34, samples=20 00:37:18.299 lat (msec) : 10=0.02%, 20=0.22%, 50=0.72%, 100=0.94%, 250=97.76% 00:37:18.299 lat (msec) : 500=0.34% 00:37:18.299 cpu : usr=1.05%, sys=1.23%, ctx=6362, majf=0, minf=1 00:37:18.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:37:18.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.299 issued rwts: total=0,4159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.299 job4: (groupid=0, jobs=1): err= 0: pid=102336: Wed May 15 01:03:20 2024 00:37:18.299 write: IOPS=455, BW=114MiB/s (119MB/s)(1152MiB/10112msec); 0 zone resets 00:37:18.300 slat (usec): min=17, max=22886, avg=2165.16, stdev=3748.82 00:37:18.300 clat (msec): min=19, max=226, avg=138.20, stdev=20.79 00:37:18.300 lat (msec): min=19, max=226, avg=140.37, stdev=20.80 00:37:18.300 clat percentiles (msec): 00:37:18.300 | 1.00th=[ 95], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 123], 00:37:18.300 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 150], 00:37:18.300 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 163], 00:37:18.300 | 99.00th=[ 167], 99.50th=[ 182], 99.90th=[ 220], 99.95th=[ 220], 00:37:18.300 | 99.99th=[ 228] 00:37:18.300 bw ( KiB/s): min=100352, max=133632, per=10.32%, avg=116388.05, stdev=14963.86, samples=20 00:37:18.300 iops : min= 392, max= 522, avg=454.60, stdev=58.49, samples=20 00:37:18.300 lat (msec) : 20=0.07%, 50=0.39%, 100=0.59%, 250=98.96% 00:37:18.300 cpu : usr=1.08%, sys=1.33%, ctx=5299, majf=0, minf=1 00:37:18.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:37:18.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.300 issued rwts: total=0,4609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.300 job5: (groupid=0, jobs=1): err= 0: pid=102337: Wed May 15 01:03:20 2024 00:37:18.300 write: IOPS=331, BW=82.9MiB/s (86.9MB/s)(843MiB/10168msec); 0 zone resets 00:37:18.300 slat (usec): min=23, max=43691, avg=2922.91, stdev=5220.55 00:37:18.300 clat (msec): min=46, max=356, avg=190.09, stdev=22.66 00:37:18.300 lat (msec): min=46, max=356, avg=193.01, stdev=22.50 00:37:18.300 clat percentiles (msec): 00:37:18.300 | 1.00th=[ 92], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 182], 00:37:18.300 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:37:18.300 | 70.00th=[ 199], 80.00th=[ 201], 90.00th=[ 207], 95.00th=[ 209], 00:37:18.300 | 99.00th=[ 257], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 359], 00:37:18.300 | 99.99th=[ 359] 00:37:18.300 bw ( KiB/s): min=79712, max=97280, per=7.51%, avg=84625.60, stdev=4482.18, samples=20 00:37:18.300 iops : min= 311, max= 380, avg=330.55, stdev=17.53, samples=20 00:37:18.300 lat (msec) : 50=0.12%, 
100=0.98%, 250=97.89%, 500=1.01% 00:37:18.300 cpu : usr=0.81%, sys=0.96%, ctx=3520, majf=0, minf=1 00:37:18.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:37:18.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.300 issued rwts: total=0,3370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.300 job6: (groupid=0, jobs=1): err= 0: pid=102338: Wed May 15 01:03:20 2024 00:37:18.300 write: IOPS=514, BW=129MiB/s (135MB/s)(1303MiB/10135msec); 0 zone resets 00:37:18.300 slat (usec): min=22, max=77016, avg=1881.13, stdev=3459.22 00:37:18.300 clat (msec): min=30, max=287, avg=122.56, stdev=27.50 00:37:18.300 lat (msec): min=33, max=287, avg=124.44, stdev=27.70 00:37:18.300 clat percentiles (msec): 00:37:18.300 | 1.00th=[ 92], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 101], 00:37:18.300 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 106], 60.00th=[ 128], 00:37:18.300 | 70.00th=[ 146], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 159], 00:37:18.300 | 99.00th=[ 194], 99.50th=[ 228], 99.90th=[ 279], 99.95th=[ 279], 00:37:18.300 | 99.99th=[ 288] 00:37:18.300 bw ( KiB/s): min=104448, max=160768, per=11.69%, avg=131774.50, stdev=25508.39, samples=20 00:37:18.300 iops : min= 408, max= 628, avg=514.70, stdev=99.68, samples=20 00:37:18.300 lat (msec) : 50=0.23%, 100=17.70%, 250=81.80%, 500=0.27% 00:37:18.300 cpu : usr=1.32%, sys=1.43%, ctx=6144, majf=0, minf=1 00:37:18.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:18.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.300 issued rwts: total=0,5210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.300 job7: (groupid=0, jobs=1): err= 0: pid=102339: Wed May 15 01:03:20 2024 00:37:18.300 write: IOPS=315, BW=78.8MiB/s (82.6MB/s)(801MiB/10173msec); 0 zone resets 00:37:18.300 slat (usec): min=22, max=51339, avg=3114.76, stdev=5760.47 00:37:18.300 clat (msec): min=22, max=367, avg=199.95, stdev=25.57 00:37:18.300 lat (msec): min=22, max=367, avg=203.06, stdev=25.26 00:37:18.300 clat percentiles (msec): 00:37:18.300 | 1.00th=[ 80], 5.00th=[ 169], 10.00th=[ 184], 20.00th=[ 192], 00:37:18.300 | 30.00th=[ 197], 40.00th=[ 201], 50.00th=[ 203], 60.00th=[ 207], 00:37:18.300 | 70.00th=[ 209], 80.00th=[ 211], 90.00th=[ 215], 95.00th=[ 220], 00:37:18.300 | 99.00th=[ 268], 99.50th=[ 317], 99.90th=[ 355], 99.95th=[ 368], 00:37:18.300 | 99.99th=[ 368] 00:37:18.300 bw ( KiB/s): min=77824, max=92160, per=7.14%, avg=80465.70, stdev=3326.10, samples=20 00:37:18.300 iops : min= 304, max= 360, avg=314.05, stdev=13.01, samples=20 00:37:18.300 lat (msec) : 50=0.50%, 100=1.00%, 250=97.32%, 500=1.19% 00:37:18.300 cpu : usr=0.67%, sys=1.02%, ctx=2375, majf=0, minf=1 00:37:18.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:37:18.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.300 issued rwts: total=0,3205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.300 job8: (groupid=0, jobs=1): err= 0: pid=102340: Wed May 15 01:03:20 2024 00:37:18.300 write: 
IOPS=453, BW=113MiB/s (119MB/s)(1146MiB/10109msec); 0 zone resets 00:37:18.300 slat (usec): min=19, max=59432, avg=2174.90, stdev=3830.07 00:37:18.300 clat (msec): min=9, max=231, avg=138.88, stdev=21.13 00:37:18.300 lat (msec): min=9, max=231, avg=141.05, stdev=21.11 00:37:18.300 clat percentiles (msec): 00:37:18.300 | 1.00th=[ 114], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 123], 00:37:18.300 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 150], 00:37:18.300 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:37:18.300 | 99.00th=[ 192], 99.50th=[ 203], 99.90th=[ 224], 99.95th=[ 224], 00:37:18.300 | 99.99th=[ 232] 00:37:18.300 bw ( KiB/s): min=93184, max=133632, per=10.27%, avg=115763.20, stdev=15440.54, samples=20 00:37:18.300 iops : min= 364, max= 522, avg=452.20, stdev=60.31, samples=20 00:37:18.300 lat (msec) : 10=0.04%, 50=0.44%, 100=0.17%, 250=99.35% 00:37:18.300 cpu : usr=0.83%, sys=1.58%, ctx=5157, majf=0, minf=1 00:37:18.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:37:18.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.300 issued rwts: total=0,4585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.300 job9: (groupid=0, jobs=1): err= 0: pid=102341: Wed May 15 01:03:20 2024 00:37:18.300 write: IOPS=455, BW=114MiB/s (119MB/s)(1151MiB/10106msec); 0 zone resets 00:37:18.300 slat (usec): min=18, max=36349, avg=2168.57, stdev=3773.79 00:37:18.300 clat (msec): min=38, max=221, avg=138.32, stdev=19.74 00:37:18.300 lat (msec): min=38, max=221, avg=140.49, stdev=19.72 00:37:18.300 clat percentiles (msec): 00:37:18.300 | 1.00th=[ 112], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 123], 00:37:18.300 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 150], 00:37:18.300 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 163], 00:37:18.300 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 215], 99.95th=[ 215], 00:37:18.300 | 99.99th=[ 222] 00:37:18.300 bw ( KiB/s): min=100352, max=133632, per=10.31%, avg=116198.40, stdev=15054.68, samples=20 00:37:18.300 iops : min= 392, max= 522, avg=453.90, stdev=58.81, samples=20 00:37:18.300 lat (msec) : 50=0.17%, 100=0.63%, 250=99.20% 00:37:18.300 cpu : usr=0.87%, sys=1.39%, ctx=4292, majf=0, minf=1 00:37:18.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:37:18.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.300 issued rwts: total=0,4602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.300 job10: (groupid=0, jobs=1): err= 0: pid=102342: Wed May 15 01:03:20 2024 00:37:18.300 write: IOPS=513, BW=128MiB/s (135MB/s)(1302MiB/10141msec); 0 zone resets 00:37:18.300 slat (usec): min=22, max=21971, avg=1916.63, stdev=3364.20 00:37:18.300 clat (msec): min=19, max=291, avg=122.70, stdev=28.29 00:37:18.300 lat (msec): min=19, max=291, avg=124.62, stdev=28.52 00:37:18.300 clat percentiles (msec): 00:37:18.300 | 1.00th=[ 86], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 102], 00:37:18.300 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 106], 60.00th=[ 136], 00:37:18.300 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 159], 00:37:18.300 | 99.00th=[ 192], 99.50th=[ 222], 99.90th=[ 279], 99.95th=[ 
284], 00:37:18.300 | 99.99th=[ 292] 00:37:18.300 bw ( KiB/s): min=102400, max=162304, per=11.68%, avg=131660.80, stdev=25343.40, samples=20 00:37:18.300 iops : min= 400, max= 634, avg=514.30, stdev=99.00, samples=20 00:37:18.300 lat (msec) : 20=0.04%, 50=0.54%, 100=16.38%, 250=82.69%, 500=0.35% 00:37:18.300 cpu : usr=1.26%, sys=1.38%, ctx=5592, majf=0, minf=1 00:37:18.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:18.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:18.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:18.300 issued rwts: total=0,5206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:18.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:18.300 00:37:18.300 Run status group 0 (all jobs): 00:37:18.300 WRITE: bw=1101MiB/s (1155MB/s), 78.8MiB/s-129MiB/s (82.6MB/s-135MB/s), io=10.9GiB (11.8GB), run=10106-10182msec 00:37:18.300 00:37:18.300 Disk stats (read/write): 00:37:18.300 nvme0n1: ios=50/6321, merge=0/0, ticks=26/1210153, in_queue=1210179, util=97.85% 00:37:18.300 nvme10n1: ios=49/6728, merge=0/0, ticks=48/1211390, in_queue=1211438, util=98.21% 00:37:18.300 nvme1n1: ios=36/6377, merge=0/0, ticks=45/1209154, in_queue=1209199, util=98.09% 00:37:18.300 nvme2n1: ios=23/8182, merge=0/0, ticks=39/1212826, in_queue=1212865, util=98.02% 00:37:18.300 nvme3n1: ios=0/9081, merge=0/0, ticks=0/1214680, in_queue=1214680, util=98.06% 00:37:18.300 nvme4n1: ios=0/6607, merge=0/0, ticks=0/1210350, in_queue=1210350, util=98.24% 00:37:18.300 nvme5n1: ios=0/10281, merge=0/0, ticks=0/1212504, in_queue=1212504, util=98.28% 00:37:18.300 nvme6n1: ios=0/6289, merge=0/0, ticks=0/1210195, in_queue=1210195, util=98.53% 00:37:18.300 nvme7n1: ios=0/9043, merge=0/0, ticks=0/1215552, in_queue=1215552, util=98.76% 00:37:18.300 nvme8n1: ios=0/9060, merge=0/0, ticks=0/1213765, in_queue=1213765, util=98.75% 00:37:18.300 nvme9n1: ios=0/10280, merge=0/0, ticks=0/1213045, in_queue=1213045, util=98.92% 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:18.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK1 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:37:18.300 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK2 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:37:18.300 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK3 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 
00:37:18.300 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK4 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:37:18.300 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK5 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:37:18.300 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.300 01:03:20 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1217 -- # grep -q -w SPDK6 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:37:18.300 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK7 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:37:18.300 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:37:18.300 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK8 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:37:18.301 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK9 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:37:18.301 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK10 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 
1 $NVMF_SUBSYS) 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:37:18.301 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK11 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:18.301 rmmod nvme_tcp 00:37:18.301 rmmod nvme_fabrics 00:37:18.301 rmmod nvme_keyring 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 101640 ']' 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 101640 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@947 -- # '[' -z 101640 ']' 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # kill -0 101640 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # uname 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 101640 00:37:18.301 killing process with pid 101640 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@953 -- # process_name=reactor_0 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # echo 'killing process with pid 101640' 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # kill 101640 00:37:18.301 [2024-05-15 01:03:21.522059] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:18.301 01:03:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@971 -- # wait 101640 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:37:18.866 00:37:18.866 real 0m49.454s 00:37:18.866 user 2m48.983s 00:37:18.866 sys 0m22.512s 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:18.866 01:03:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:18.866 ************************************ 00:37:18.866 END TEST nvmf_multiconnection 00:37:18.866 ************************************ 00:37:18.866 01:03:22 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:37:18.866 01:03:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:37:18.866 01:03:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:37:18.866 01:03:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:18.866 ************************************ 00:37:18.866 START TEST nvmf_initiator_timeout 00:37:18.866 ************************************ 00:37:18.866 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:37:19.124 * Looking for test storage... 
00:37:19.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:19.124 01:03:22 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:37:19.124 Cannot find device "nvmf_tgt_br" 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:37:19.124 Cannot find device "nvmf_tgt_br2" 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:37:19.124 Cannot find device "nvmf_tgt_br" 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:37:19.124 Cannot find device "nvmf_tgt_br2" 00:37:19.124 01:03:22 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:19.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:19.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:37:19.124 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:37:19.125 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:19.125 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:19.125 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:19.125 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:19.125 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:19.125 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:19.125 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:19.125 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
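The nvmf_veth_init sequence traced above builds a small bridged topology: the initiator keeps 10.0.0.1 on nvmf_init_if, the target namespace nvmf_tgt_ns_spdk gets 10.0.0.2 and 10.0.0.3 on its ends of two veth pairs, and everything is tied together through the nvmf_br bridge. A condensed sketch of the same plumbing, not a drop-in replacement for nvmf/common.sh:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done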
00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:37:19.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:19.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:37:19.383 00:37:19.383 --- 10.0.0.2 ping statistics --- 00:37:19.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.383 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:37:19.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:19.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:37:19.383 00:37:19.383 --- 10.0.0.3 ping statistics --- 00:37:19.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.383 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:19.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:19.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:37:19.383 00:37:19.383 --- 10.0.0.1 ping statistics --- 00:37:19.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.383 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=102711 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 102711 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@828 -- # '[' -z 102711 ']' 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:19.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:37:19.383 01:03:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:19.383 [2024-05-15 01:03:22.598873] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:37:19.383 [2024-05-15 01:03:22.598963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:19.640 [2024-05-15 01:03:22.735903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:19.640 [2024-05-15 01:03:22.823759] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:19.640 [2024-05-15 01:03:22.823812] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:19.640 [2024-05-15 01:03:22.823824] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:19.640 [2024-05-15 01:03:22.823840] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:19.640 [2024-05-15 01:03:22.823847] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
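The target application is then launched inside that namespace and the harness blocks until its RPC socket answers (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above). A rough equivalent, assuming the repo layout used in this run and polling with rpc.py in place of the harness's own waitforlisten helper:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket; bail out if the target died before it came up
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done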
00:37:19.640 [2024-05-15 01:03:22.824013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.640 [2024-05-15 01:03:22.824056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:19.640 [2024-05-15 01:03:22.825048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:19.640 [2024-05-15 01:03:22.825092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@861 -- # return 0 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:20.574 Malloc0 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:20.574 Delay0 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:20.574 [2024-05-15 01:03:23.665440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:20.574 [2024-05-15 01:03:23.693400] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:20.574 [2024-05-15 01:03:23.693818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:20.574 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:20.832 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:37:20.832 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local i=0 00:37:20.832 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:37:20.832 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:37:20.832 01:03:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # sleep 2 00:37:22.785 01:03:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:37:22.785 01:03:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:37:22.785 01:03:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:37:22.785 01:03:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:37:22.785 01:03:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:37:22.785 01:03:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # return 0 00:37:22.785 01:03:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=102793 00:37:22.785 01:03:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:37:22.785 01:03:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:37:22.785 [global] 00:37:22.785 thread=1 00:37:22.785 invalidate=1 00:37:22.785 rw=write 00:37:22.785 time_based=1 00:37:22.785 runtime=60 00:37:22.785 ioengine=libaio 00:37:22.785 direct=1 00:37:22.785 bs=4096 00:37:22.785 iodepth=1 00:37:22.785 norandommap=0 00:37:22.785 numjobs=1 00:37:22.785 00:37:22.785 verify_dump=1 00:37:22.785 verify_backlog=512 00:37:22.785 verify_state_save=0 00:37:22.785 do_verify=1 00:37:22.785 verify=crc32c-intel 00:37:22.785 [job0] 00:37:22.785 
filename=/dev/nvme0n1 00:37:22.785 Could not set queue depth (nvme0n1) 00:37:22.785 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:22.785 fio-3.35 00:37:22.785 Starting 1 thread 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:26.068 true 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:26.068 true 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:26.068 true 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:26.068 true 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:26.068 01:03:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:37:29.352 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:37:29.352 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:29.352 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:29.352 true 00:37:29.352 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:29.353 true 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 
-- # set +x 00:37:29.353 true 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:29.353 true 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:37:29.353 01:03:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 102793 00:38:25.574 00:38:25.574 job0: (groupid=0, jobs=1): err= 0: pid=102814: Wed May 15 01:04:26 2024 00:38:25.574 read: IOPS=768, BW=3072KiB/s (3146kB/s)(180MiB/60000msec) 00:38:25.574 slat (usec): min=13, max=144, avg=15.95, stdev= 2.92 00:38:25.574 clat (usec): min=166, max=2497, avg=211.08, stdev=22.85 00:38:25.574 lat (usec): min=181, max=2522, avg=227.04, stdev=23.12 00:38:25.574 clat percentiles (usec): 00:38:25.574 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:38:25.574 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:38:25.574 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 239], 00:38:25.574 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 293], 99.95th=[ 330], 00:38:25.574 | 99.99th=[ 742] 00:38:25.574 write: IOPS=774, BW=3098KiB/s (3172kB/s)(182MiB/60000msec); 0 zone resets 00:38:25.574 slat (usec): min=19, max=11775, avg=23.53, stdev=70.04 00:38:25.574 clat (usec): min=102, max=40660k, avg=1039.01, stdev=188616.17 00:38:25.574 lat (usec): min=148, max=40660k, avg=1062.54, stdev=188616.17 00:38:25.574 clat percentiles (usec): 00:38:25.574 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:38:25.574 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:38:25.574 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:38:25.574 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 249], 99.95th=[ 285], 00:38:25.574 | 99.99th=[ 1401] 00:38:25.574 bw ( KiB/s): min= 3280, max=12288, per=100.00%, avg=9320.62, stdev=1714.79, samples=39 00:38:25.574 iops : min= 820, max= 3072, avg=2330.21, stdev=428.68, samples=39 00:38:25.574 lat (usec) : 250=99.23%, 500=0.75%, 750=0.01%, 1000=0.01% 00:38:25.574 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:38:25.575 cpu : usr=0.57%, sys=2.24%, ctx=92628, majf=0, minf=2 00:38:25.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:25.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.575 issued rwts: total=46080,46470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:25.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:25.575 00:38:25.575 Run status group 0 (all jobs): 00:38:25.575 READ: bw=3072KiB/s (3146kB/s), 3072KiB/s-3072KiB/s (3146kB/s-3146kB/s), io=180MiB (189MB), run=60000-60000msec 00:38:25.575 WRITE: bw=3098KiB/s (3172kB/s), 3098KiB/s-3098KiB/s (3172kB/s-3172kB/s), io=182MiB (190MB), run=60000-60000msec 00:38:25.575 00:38:25.575 Disk stats (read/write): 00:38:25.575 nvme0n1: ios=46206/46080, merge=0/0, ticks=10085/8037, in_queue=18122, util=99.87% 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout 
-- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:25.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # local i=0 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:25.575 nvmf hotplug test: fio successful as expected 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1228 -- # return 0 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:25.575 rmmod nvme_tcp 00:38:25.575 rmmod nvme_fabrics 00:38:25.575 rmmod nvme_keyring 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 102711 ']' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 102711 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@947 -- # '[' -z 102711 ']' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # kill -0 102711 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@952 -- # uname 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 102711 00:38:25.575 killing process with pid 102711 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 102711' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # kill 102711 00:38:25.575 [2024-05-15 01:04:26.352397] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # wait 102711 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:38:25.575 ************************************ 00:38:25.575 END TEST nvmf_initiator_timeout 00:38:25.575 ************************************ 00:38:25.575 00:38:25.575 real 1m4.535s 00:38:25.575 user 4m6.913s 00:38:25.575 sys 0m8.071s 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:25.575 01:04:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:25.575 01:04:26 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:38:25.575 01:04:26 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:38:25.575 01:04:26 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:25.575 01:04:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:25.575 01:04:26 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:38:25.575 01:04:26 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:38:25.575 01:04:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:25.575 01:04:26 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:38:25.575 01:04:26 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:38:25.575 01:04:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:38:25.575 01:04:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:25.575 01:04:26 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:38:25.575 ************************************ 00:38:25.575 START TEST nvmf_multicontroller 00:38:25.575 ************************************ 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:38:25.575 * Looking for test storage... 00:38:25.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.575 
01:04:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.575 01:04:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@23 -- # nvmftestinit 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:25.576 Cannot find device "nvmf_tgt_br" 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:25.576 Cannot find device "nvmf_tgt_br2" 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 
00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:25.576 Cannot find device "nvmf_tgt_br" 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:38:25.576 Cannot find device "nvmf_tgt_br2" 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:25.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:38:25.576 01:04:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:25.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
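The "Cannot find device" and "Cannot open network namespace" lines earlier in this stretch are expected: before building the topology, nvmf_veth_init first tears down whatever a previous run may have left behind, and on a clean workspace those deletions have nothing to remove. The pattern is best-effort cleanup, roughly:

  # every teardown step is allowed to fail, which is exactly the noise seen above
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true
      ip link set "$dev" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true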
00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:25.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:25.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:38:25.576 00:38:25.576 --- 10.0.0.2 ping statistics --- 00:38:25.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.576 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:25.576 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:25.576 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:38:25.576 00:38:25.576 --- 10.0.0.3 ping statistics --- 00:38:25.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.576 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:25.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:25.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:38:25.576 00:38:25.576 --- 10.0.0.1 ping statistics --- 00:38:25.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.576 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@721 -- # xtrace_disable 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.576 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=103627 00:38:25.577 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
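Note the core mask: this target is started with -m 0xE, whereas the initiator_timeout target above used -m 0xF. The mask is a plain bitmap of CPU cores, so 0xE (binary 1110) leaves core 0 free and pins reactors to cores 1-3, which is why the reactor start-up notices that follow list cores 1, 2 and 3 only. A throwaway snippet for decoding such a mask, not something the harness itself runs:

  mask=0xE                       # try 0xF to compare with the previous test
  for core in $(seq 0 63); do
      (( (mask >> core) & 1 )) && printf 'reactor core %d\n' "$core"
  done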
00:38:25.577 01:04:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 103627 00:38:25.577 01:04:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 103627 ']' 00:38:25.577 01:04:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:25.577 01:04:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:38:25.577 01:04:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:25.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:25.577 01:04:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:38:25.577 01:04:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 [2024-05-15 01:04:27.281816] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:38:25.577 [2024-05-15 01:04:27.281909] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:25.577 [2024-05-15 01:04:27.418909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:25.577 [2024-05-15 01:04:27.498591] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:25.577 [2024-05-15 01:04:27.498668] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:25.577 [2024-05-15 01:04:27.498680] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:25.577 [2024-05-15 01:04:27.498689] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:25.577 [2024-05-15 01:04:27.498696] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
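waitforlisten, which gates the rest of the test, simply polls the target's RPC socket until it answers. A minimal stand-in is sketched below, assuming scripts/rpc.py from the same repo and the max_retries=100 budget shown in the trace; it is not the helper itself.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do                               # max_retries=100, as in the trace
      $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done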
00:38:25.577 [2024-05-15 01:04:27.498850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:25.577 [2024-05-15 01:04:27.498946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:25.577 [2024-05-15 01:04:27.498949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 [2024-05-15 01:04:28.353714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 Malloc0 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 [2024-05-15 01:04:28.412803] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:25.577 [2024-05-15 
01:04:28.413060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 [2024-05-15 01:04:28.420916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 Malloc1 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
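Collapsed into plain rpc.py calls, the target configuration that the rpc_cmd sequence above installs over /var/tmp/spdk.sock is the following; rpc_cmd is effectively this wrapper with retries, and the arguments are exactly the ones shown in the trace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

bdevperf, started next with its own RPC socket at /var/tmp/bdevperf.sock, is what then attaches to these two subsystems and drives the duplicate-attach and multipath checks that follow.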
00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=103679 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 103679 /var/tmp/bdevperf.sock 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 103679 ']' 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:38:25.577 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.835 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.836 NVMe0n1 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.836 1 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:38:25.836 01:04:28 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.836 2024/05/15 01:04:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:38:25.836 request: 00:38:25.836 { 00:38:25.836 "method": "bdev_nvme_attach_controller", 00:38:25.836 "params": { 00:38:25.836 "name": "NVMe0", 00:38:25.836 "trtype": "tcp", 00:38:25.836 "traddr": "10.0.0.2", 00:38:25.836 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:38:25.836 "hostaddr": "10.0.0.2", 00:38:25.836 "hostsvcid": "60000", 00:38:25.836 "adrfam": "ipv4", 00:38:25.836 "trsvcid": "4420", 00:38:25.836 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:38:25.836 } 00:38:25.836 } 00:38:25.836 Got JSON-RPC error response 00:38:25.836 GoRPCClient: error on JSON-RPC call 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.836 01:04:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.836 2024/05/15 01:04:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:38:25.836 request: 00:38:25.836 { 00:38:25.836 "method": "bdev_nvme_attach_controller", 00:38:25.836 "params": { 00:38:25.836 "name": "NVMe0", 00:38:25.836 "trtype": "tcp", 00:38:25.836 "traddr": "10.0.0.2", 00:38:25.836 "hostaddr": "10.0.0.2", 00:38:25.836 "hostsvcid": "60000", 00:38:25.836 "adrfam": "ipv4", 00:38:25.836 "trsvcid": "4420", 00:38:25.836 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:38:25.836 } 00:38:25.836 } 00:38:25.836 Got JSON-RPC error response 00:38:25.836 GoRPCClient: error on JSON-RPC call 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.836 2024/05/15 01:04:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:38:25.836 request: 00:38:25.836 { 00:38:25.836 "method": "bdev_nvme_attach_controller", 00:38:25.836 "params": { 00:38:25.836 "name": "NVMe0", 00:38:25.836 "trtype": "tcp", 00:38:25.836 "traddr": "10.0.0.2", 00:38:25.836 "hostaddr": "10.0.0.2", 00:38:25.836 "hostsvcid": "60000", 00:38:25.836 "adrfam": "ipv4", 00:38:25.836 "trsvcid": "4420", 00:38:25.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:25.836 "multipath": "disable" 00:38:25.836 } 00:38:25.836 } 00:38:25.836 Got JSON-RPC error response 00:38:25.836 GoRPCClient: error on JSON-RPC call 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.836 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.836 2024/05/15 01:04:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:38:25.836 request: 00:38:25.836 { 00:38:25.836 "method": "bdev_nvme_attach_controller", 00:38:25.836 "params": { 00:38:25.836 "name": "NVMe0", 00:38:25.836 "trtype": "tcp", 00:38:25.836 "traddr": "10.0.0.2", 00:38:25.836 "hostaddr": "10.0.0.2", 00:38:25.836 "hostsvcid": "60000", 00:38:25.836 "adrfam": "ipv4", 
00:38:25.836 "trsvcid": "4420", 00:38:25.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:25.836 "multipath": "failover" 00:38:25.836 } 00:38:25.836 } 00:38:25.837 Got JSON-RPC error response 00:38:25.837 GoRPCClient: error on JSON-RPC call 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:25.837 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.837 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:26.095 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:38:26.095 01:04:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:27.468 0 00:38:27.468 01:04:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:38:27.468 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.468 01:04:30 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:38:27.468 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.468 01:04:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 103679 00:38:27.468 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 103679 ']' 00:38:27.468 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 103679 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 103679 00:38:27.469 killing process with pid 103679 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 103679' 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 103679 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 103679 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # sort -u 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # cat 00:38:27.469 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:38:27.469 [2024-05-15 01:04:28.530586] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:38:27.469 [2024-05-15 01:04:28.531186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103679 ] 00:38:27.469 [2024-05-15 01:04:28.669396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.469 [2024-05-15 01:04:28.758883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.469 [2024-05-15 01:04:29.187981] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 559f04c9-55e4-40be-a5cf-46673711b4b7 already exists 00:38:27.469 [2024-05-15 01:04:29.188052] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:559f04c9-55e4-40be-a5cf-46673711b4b7 alias for bdev NVMe1n1 00:38:27.469 [2024-05-15 01:04:29.188074] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:38:27.469 Running I/O for 1 seconds... 00:38:27.469 00:38:27.469 Latency(us) 00:38:27.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.469 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:38:27.469 NVMe0n1 : 1.00 19547.31 76.36 0.00 0.00 6532.43 3440.64 14775.39 00:38:27.469 =================================================================================================================== 00:38:27.469 Total : 19547.31 76.36 0.00 0.00 6532.43 3440.64 14775.39 00:38:27.469 Received shutdown signal, test time was about 1.000000 seconds 00:38:27.469 00:38:27.469 Latency(us) 00:38:27.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.469 =================================================================================================================== 00:38:27.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:27.469 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1615 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:27.469 rmmod nvme_tcp 00:38:27.469 rmmod nvme_fabrics 00:38:27.469 rmmod nvme_keyring 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 103627 ']' 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 103627 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 103627 ']' 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@951 -- # kill -0 103627 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 103627 00:38:27.469 killing process with pid 103627 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 103627' 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 103627 00:38:27.469 [2024-05-15 01:04:30.730553] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:27.469 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 103627 00:38:27.728 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:27.728 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:27.728 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:27.728 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:27.728 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:27.728 01:04:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.728 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:27.728 01:04:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.986 01:04:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:38:27.987 ************************************ 00:38:27.987 END TEST nvmf_multicontroller 00:38:27.987 ************************************ 00:38:27.987 00:38:27.987 real 0m4.299s 00:38:27.987 user 0m12.777s 00:38:27.987 sys 0m1.040s 00:38:27.987 01:04:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:27.987 01:04:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:27.987 01:04:31 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:38:27.987 01:04:31 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:38:27.987 01:04:31 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:27.987 01:04:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:27.987 ************************************ 00:38:27.987 START TEST nvmf_aer 00:38:27.987 ************************************ 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:38:27.987 * Looking for test storage... 
00:38:27.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:27.987 
01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:27.987 Cannot find device "nvmf_tgt_br" 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:27.987 Cannot find device "nvmf_tgt_br2" 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:27.987 Cannot find device "nvmf_tgt_br" 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:38:27.987 Cannot find device "nvmf_tgt_br2" 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:38:27.987 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:28.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:28.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:28.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:28.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:38:28.247 00:38:28.247 --- 10.0.0.2 ping statistics --- 00:38:28.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.247 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:28.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:28.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:38:28.247 00:38:28.247 --- 10.0.0.3 ping statistics --- 00:38:28.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.247 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:28.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:28.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:38:28.247 00:38:28.247 --- 10.0.0.1 ping statistics --- 00:38:28.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.247 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:28.247 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:28.505 01:04:31 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:38:28.505 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:28.505 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:38:28.505 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:28.506 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=103914 00:38:28.506 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:28.506 01:04:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 103914 00:38:28.506 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 103914 ']' 00:38:28.506 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:28.506 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:38:28.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:28.506 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:28.506 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:38:28.506 01:04:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:28.506 [2024-05-15 01:04:31.611343] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:38:28.506 [2024-05-15 01:04:31.611450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:28.506 [2024-05-15 01:04:31.755338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:28.764 [2024-05-15 01:04:31.856065] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:28.764 [2024-05-15 01:04:31.856135] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:28.764 [2024-05-15 01:04:31.856161] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:28.764 [2024-05-15 01:04:31.856184] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:28.764 [2024-05-15 01:04:31.856193] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:28.764 [2024-05-15 01:04:31.856970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.764 [2024-05-15 01:04:31.857097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:28.764 [2024-05-15 01:04:31.857223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.764 [2024-05-15 01:04:31.857216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:29.331 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:38:29.331 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:38:29.331 01:04:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:29.331 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:29.331 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.591 [2024-05-15 01:04:32.643789] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.591 Malloc0 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.591 [2024-05-15 01:04:32.708804] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:29.591 [2024-05-15 01:04:32.709324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.591 [ 00:38:29.591 { 00:38:29.591 "allow_any_host": true, 00:38:29.591 "hosts": [], 00:38:29.591 "listen_addresses": [], 00:38:29.591 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:29.591 "subtype": "Discovery" 00:38:29.591 }, 00:38:29.591 { 00:38:29.591 "allow_any_host": true, 00:38:29.591 "hosts": [], 00:38:29.591 "listen_addresses": [ 00:38:29.591 { 00:38:29.591 "adrfam": "IPv4", 00:38:29.591 "traddr": "10.0.0.2", 00:38:29.591 "trsvcid": "4420", 00:38:29.591 "trtype": "TCP" 00:38:29.591 } 00:38:29.591 ], 00:38:29.591 "max_cntlid": 65519, 00:38:29.591 "max_namespaces": 2, 00:38:29.591 "min_cntlid": 1, 00:38:29.591 "model_number": "SPDK bdev Controller", 00:38:29.591 "namespaces": [ 00:38:29.591 { 00:38:29.591 "bdev_name": "Malloc0", 00:38:29.591 "name": "Malloc0", 00:38:29.591 "nguid": "C306D9D55DB4492C93F709129871D90A", 00:38:29.591 "nsid": 1, 00:38:29.591 "uuid": "c306d9d5-5db4-492c-93f7-09129871d90a" 00:38:29.591 } 00:38:29.591 ], 00:38:29.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:29.591 "serial_number": "SPDK00000000000001", 00:38:29.591 "subtype": "NVMe" 00:38:29.591 } 00:38:29.591 ] 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=103974 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:38:29.591 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.874 Malloc1 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.874 01:04:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.874 Asynchronous Event Request test 00:38:29.874 Attaching to 10.0.0.2 00:38:29.874 Attached to 10.0.0.2 00:38:29.874 Registering asynchronous event callbacks... 00:38:29.874 Starting namespace attribute notice tests for all controllers... 00:38:29.874 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:38:29.874 aer_cb - Changed Namespace 00:38:29.874 Cleaning up... 00:38:29.874 [ 00:38:29.874 { 00:38:29.874 "allow_any_host": true, 00:38:29.874 "hosts": [], 00:38:29.874 "listen_addresses": [], 00:38:29.874 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:29.874 "subtype": "Discovery" 00:38:29.874 }, 00:38:29.874 { 00:38:29.874 "allow_any_host": true, 00:38:29.874 "hosts": [], 00:38:29.874 "listen_addresses": [ 00:38:29.874 { 00:38:29.874 "adrfam": "IPv4", 00:38:29.874 "traddr": "10.0.0.2", 00:38:29.874 "trsvcid": "4420", 00:38:29.874 "trtype": "TCP" 00:38:29.874 } 00:38:29.874 ], 00:38:29.874 "max_cntlid": 65519, 00:38:29.874 "max_namespaces": 2, 00:38:29.874 "min_cntlid": 1, 00:38:29.874 "model_number": "SPDK bdev Controller", 00:38:29.874 "namespaces": [ 00:38:29.874 { 00:38:29.874 "bdev_name": "Malloc0", 00:38:29.874 "name": "Malloc0", 00:38:29.874 "nguid": "C306D9D55DB4492C93F709129871D90A", 00:38:29.874 "nsid": 1, 00:38:29.874 "uuid": "c306d9d5-5db4-492c-93f7-09129871d90a" 00:38:29.874 }, 00:38:29.874 { 00:38:29.874 "bdev_name": "Malloc1", 00:38:29.874 "name": "Malloc1", 00:38:29.874 "nguid": "78F67436CFD84A5F9FC7190881FF305B", 00:38:29.874 "nsid": 2, 00:38:29.874 "uuid": "78f67436-cfd8-4a5f-9fc7-190881ff305b" 00:38:29.874 } 00:38:29.874 ], 00:38:29.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:29.874 "serial_number": "SPDK00000000000001", 00:38:29.874 "subtype": "NVMe" 00:38:29.874 } 00:38:29.874 ] 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 103974 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:29.874 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:29.874 rmmod nvme_tcp 00:38:29.874 rmmod nvme_fabrics 00:38:30.133 rmmod nvme_keyring 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 103914 ']' 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 103914 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 103914 ']' 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # kill -0 103914 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 103914 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 103914' 00:38:30.133 killing process with pid 103914 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # kill 103914 00:38:30.133 [2024-05-15 01:04:33.208113] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@971 -- # wait 103914 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:30.133 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.394 01:04:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:38:30.394 00:38:30.394 real 0m2.362s 00:38:30.394 user 0m6.436s 00:38:30.394 sys 0m0.665s 00:38:30.394 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:30.394 01:04:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:30.394 ************************************ 00:38:30.394 END TEST nvmf_aer 00:38:30.394 ************************************ 00:38:30.394 01:04:33 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:38:30.394 01:04:33 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:38:30.394 01:04:33 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:30.394 01:04:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:30.394 ************************************ 00:38:30.394 START TEST nvmf_async_init 00:38:30.394 ************************************ 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:38:30.394 * Looking for test storage... 00:38:30.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:30.394 
01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:30.394 01:04:33 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:30.394 01:04:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e6eef3fe6dc14869ba6708231d5d95b4 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:30.395 Cannot find device "nvmf_tgt_br" 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:30.395 Cannot find device "nvmf_tgt_br2" 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:30.395 Cannot find device "nvmf_tgt_br" 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:38:30.395 Cannot find device "nvmf_tgt_br2" 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:38:30.395 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:30.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:30.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:30.660 
01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:30.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:30.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:38:30.660 00:38:30.660 --- 10.0.0.2 ping statistics --- 00:38:30.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.660 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:30.660 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:30.660 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:38:30.660 00:38:30.660 --- 10.0.0.3 ping statistics --- 00:38:30.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.660 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:30.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
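A condensed recap of the veth topology that nvmf_veth_init has just built; every command below is restated from the trace above, not new to this run. The initiator keeps nvmf_init_if at 10.0.0.1 in the root namespace, the target's two interfaces sit at 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, the peer ends are enslaved to the nvmf_br bridge so the two sides can reach each other, and an iptables rule opens TCP/4420 on the initiator interface.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
The ping checks that follow in the trace confirm that 10.0.0.2 and 10.0.0.3 are reachable from the root namespace and that 10.0.0.1 is reachable from inside nvmf_tgt_ns_spdk.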
00:38:30.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:38:30.660 00:38:30.660 --- 10.0.0.1 ping statistics --- 00:38:30.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.660 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:38:30.660 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.918 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=104145 00:38:30.918 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:38:30.918 01:04:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 104145 00:38:30.918 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 104145 ']' 00:38:30.918 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:30.918 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:38:30.918 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:30.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:30.918 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:38:30.918 01:04:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.918 [2024-05-15 01:04:33.992812] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:38:30.918 [2024-05-15 01:04:33.992916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:30.918 [2024-05-15 01:04:34.133964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.176 [2024-05-15 01:04:34.229780] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:31.176 [2024-05-15 01:04:34.229830] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
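The modprobe nvme-tcp recorded just above loads the kernel NVMe/TCP initiator before the host tests start; nvme-fabrics (and on this kernel nvme-keyring) come in as dependencies, which is why the teardown paths in this log remove nvme_tcp, nvme_fabrics and nvme_keyring together. A hedged out-of-band check, assuming lsmod module naming; the harness itself does not run this:
sudo modprobe nvme-tcp
# nvme_tcp and its nvme_fabrics dependency should now be resident
lsmod | grep -E '^nvme_(tcp|fabrics)' || echo 'nvme-tcp did not load' >&2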
00:38:31.176 [2024-05-15 01:04:34.229841] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:31.176 [2024-05-15 01:04:34.229850] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:31.176 [2024-05-15 01:04:34.229857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:31.176 [2024-05-15 01:04:34.229887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.743 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:38:31.743 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:38:31.743 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:31.743 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:31.743 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.001 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:32.001 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:32.001 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.001 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.001 [2024-05-15 01:04:35.065791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.002 null0 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e6eef3fe6dc14869ba6708231d5d95b4 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:32.002 
01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.002 [2024-05-15 01:04:35.105765] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:32.002 [2024-05-15 01:04:35.106032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.002 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.261 nvme0n1 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.261 [ 00:38:32.261 { 00:38:32.261 "aliases": [ 00:38:32.261 "e6eef3fe-6dc1-4869-ba67-08231d5d95b4" 00:38:32.261 ], 00:38:32.261 "assigned_rate_limits": { 00:38:32.261 "r_mbytes_per_sec": 0, 00:38:32.261 "rw_ios_per_sec": 0, 00:38:32.261 "rw_mbytes_per_sec": 0, 00:38:32.261 "w_mbytes_per_sec": 0 00:38:32.261 }, 00:38:32.261 "block_size": 512, 00:38:32.261 "claimed": false, 00:38:32.261 "driver_specific": { 00:38:32.261 "mp_policy": "active_passive", 00:38:32.261 "nvme": [ 00:38:32.261 { 00:38:32.261 "ctrlr_data": { 00:38:32.261 "ana_reporting": false, 00:38:32.261 "cntlid": 1, 00:38:32.261 "firmware_revision": "24.05", 00:38:32.261 "model_number": "SPDK bdev Controller", 00:38:32.261 "multi_ctrlr": true, 00:38:32.261 "oacs": { 00:38:32.261 "firmware": 0, 00:38:32.261 "format": 0, 00:38:32.261 "ns_manage": 0, 00:38:32.261 "security": 0 00:38:32.261 }, 00:38:32.261 "serial_number": "00000000000000000000", 00:38:32.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.261 "vendor_id": "0x8086" 00:38:32.261 }, 00:38:32.261 "ns_data": { 00:38:32.261 "can_share": true, 00:38:32.261 "id": 1 00:38:32.261 }, 00:38:32.261 "trid": { 00:38:32.261 "adrfam": "IPv4", 00:38:32.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.261 "traddr": "10.0.0.2", 00:38:32.261 "trsvcid": "4420", 00:38:32.261 "trtype": "TCP" 00:38:32.261 }, 00:38:32.261 "vs": { 00:38:32.261 "nvme_version": "1.3" 00:38:32.261 } 00:38:32.261 } 00:38:32.261 ] 00:38:32.261 }, 00:38:32.261 "memory_domains": [ 00:38:32.261 { 00:38:32.261 "dma_device_id": "system", 00:38:32.261 "dma_device_type": 1 00:38:32.261 } 00:38:32.261 ], 00:38:32.261 "name": "nvme0n1", 00:38:32.261 "num_blocks": 2097152, 00:38:32.261 "product_name": "NVMe disk", 00:38:32.261 "supported_io_types": { 00:38:32.261 "abort": true, 00:38:32.261 "compare": true, 00:38:32.261 "compare_and_write": true, 00:38:32.261 "flush": true, 00:38:32.261 "nvme_admin": true, 00:38:32.261 "nvme_io": true, 00:38:32.261 "read": true, 00:38:32.261 "reset": true, 00:38:32.261 "unmap": false, 00:38:32.261 "write": true, 00:38:32.261 "write_zeroes": true 00:38:32.261 }, 
00:38:32.261 "uuid": "e6eef3fe-6dc1-4869-ba67-08231d5d95b4", 00:38:32.261 "zoned": false 00:38:32.261 } 00:38:32.261 ] 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.261 [2024-05-15 01:04:35.369982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:32.261 [2024-05-15 01:04:35.370112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5f230 (9): Bad file descriptor 00:38:32.261 [2024-05-15 01:04:35.501813] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.261 [ 00:38:32.261 { 00:38:32.261 "aliases": [ 00:38:32.261 "e6eef3fe-6dc1-4869-ba67-08231d5d95b4" 00:38:32.261 ], 00:38:32.261 "assigned_rate_limits": { 00:38:32.261 "r_mbytes_per_sec": 0, 00:38:32.261 "rw_ios_per_sec": 0, 00:38:32.261 "rw_mbytes_per_sec": 0, 00:38:32.261 "w_mbytes_per_sec": 0 00:38:32.261 }, 00:38:32.261 "block_size": 512, 00:38:32.261 "claimed": false, 00:38:32.261 "driver_specific": { 00:38:32.261 "mp_policy": "active_passive", 00:38:32.261 "nvme": [ 00:38:32.261 { 00:38:32.261 "ctrlr_data": { 00:38:32.261 "ana_reporting": false, 00:38:32.261 "cntlid": 2, 00:38:32.261 "firmware_revision": "24.05", 00:38:32.261 "model_number": "SPDK bdev Controller", 00:38:32.261 "multi_ctrlr": true, 00:38:32.261 "oacs": { 00:38:32.261 "firmware": 0, 00:38:32.261 "format": 0, 00:38:32.261 "ns_manage": 0, 00:38:32.261 "security": 0 00:38:32.261 }, 00:38:32.261 "serial_number": "00000000000000000000", 00:38:32.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.261 "vendor_id": "0x8086" 00:38:32.261 }, 00:38:32.261 "ns_data": { 00:38:32.261 "can_share": true, 00:38:32.261 "id": 1 00:38:32.261 }, 00:38:32.261 "trid": { 00:38:32.261 "adrfam": "IPv4", 00:38:32.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.261 "traddr": "10.0.0.2", 00:38:32.261 "trsvcid": "4420", 00:38:32.261 "trtype": "TCP" 00:38:32.261 }, 00:38:32.261 "vs": { 00:38:32.261 "nvme_version": "1.3" 00:38:32.261 } 00:38:32.261 } 00:38:32.261 ] 00:38:32.261 }, 00:38:32.261 "memory_domains": [ 00:38:32.261 { 00:38:32.261 "dma_device_id": "system", 00:38:32.261 "dma_device_type": 1 00:38:32.261 } 00:38:32.261 ], 00:38:32.261 "name": "nvme0n1", 00:38:32.261 "num_blocks": 2097152, 00:38:32.261 "product_name": "NVMe disk", 00:38:32.261 "supported_io_types": { 00:38:32.261 "abort": true, 00:38:32.261 "compare": true, 00:38:32.261 "compare_and_write": true, 00:38:32.261 "flush": true, 00:38:32.261 "nvme_admin": true, 00:38:32.261 "nvme_io": true, 00:38:32.261 "read": true, 00:38:32.261 "reset": true, 00:38:32.261 "unmap": false, 00:38:32.261 "write": true, 00:38:32.261 "write_zeroes": true 00:38:32.261 }, 00:38:32.261 "uuid": "e6eef3fe-6dc1-4869-ba67-08231d5d95b4", 00:38:32.261 
"zoned": false 00:38:32.261 } 00:38:32.261 ] 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.wP2hu5pAQT 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:38:32.261 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.wP2hu5pAQT 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.520 [2024-05-15 01:04:35.562233] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:32.520 [2024-05-15 01:04:35.562403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wP2hu5pAQT 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.520 [2024-05-15 01:04:35.570223] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wP2hu5pAQT 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.520 [2024-05-15 01:04:35.578232] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:32.520 [2024-05-15 01:04:35.578320] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:38:32.520 nvme0n1 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.520 [ 00:38:32.520 { 00:38:32.520 "aliases": [ 00:38:32.520 "e6eef3fe-6dc1-4869-ba67-08231d5d95b4" 00:38:32.520 ], 00:38:32.520 "assigned_rate_limits": { 00:38:32.520 "r_mbytes_per_sec": 0, 00:38:32.520 "rw_ios_per_sec": 0, 00:38:32.520 "rw_mbytes_per_sec": 0, 00:38:32.520 "w_mbytes_per_sec": 0 00:38:32.520 }, 00:38:32.520 "block_size": 512, 00:38:32.520 "claimed": false, 00:38:32.520 "driver_specific": { 00:38:32.520 "mp_policy": "active_passive", 00:38:32.520 "nvme": [ 00:38:32.520 { 00:38:32.520 "ctrlr_data": { 00:38:32.520 "ana_reporting": false, 00:38:32.520 "cntlid": 3, 00:38:32.520 "firmware_revision": "24.05", 00:38:32.520 "model_number": "SPDK bdev Controller", 00:38:32.520 "multi_ctrlr": true, 00:38:32.520 "oacs": { 00:38:32.520 "firmware": 0, 00:38:32.520 "format": 0, 00:38:32.520 "ns_manage": 0, 00:38:32.520 "security": 0 00:38:32.520 }, 00:38:32.520 "serial_number": "00000000000000000000", 00:38:32.520 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.520 "vendor_id": "0x8086" 00:38:32.520 }, 00:38:32.520 "ns_data": { 00:38:32.520 "can_share": true, 00:38:32.520 "id": 1 00:38:32.520 }, 00:38:32.520 "trid": { 00:38:32.520 "adrfam": "IPv4", 00:38:32.520 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.520 "traddr": "10.0.0.2", 00:38:32.520 "trsvcid": "4421", 00:38:32.520 "trtype": "TCP" 00:38:32.520 }, 00:38:32.520 "vs": { 00:38:32.520 "nvme_version": "1.3" 00:38:32.520 } 00:38:32.520 } 00:38:32.520 ] 00:38:32.520 }, 00:38:32.520 "memory_domains": [ 00:38:32.520 { 00:38:32.520 "dma_device_id": "system", 00:38:32.520 "dma_device_type": 1 00:38:32.520 } 00:38:32.520 ], 00:38:32.520 "name": "nvme0n1", 00:38:32.520 "num_blocks": 2097152, 00:38:32.520 "product_name": "NVMe disk", 00:38:32.520 "supported_io_types": { 00:38:32.520 "abort": true, 00:38:32.520 "compare": true, 00:38:32.520 "compare_and_write": true, 00:38:32.520 "flush": true, 00:38:32.520 "nvme_admin": true, 00:38:32.520 "nvme_io": true, 00:38:32.520 "read": true, 00:38:32.520 "reset": true, 00:38:32.520 "unmap": false, 00:38:32.520 "write": true, 00:38:32.520 "write_zeroes": true 00:38:32.520 }, 00:38:32.520 "uuid": "e6eef3fe-6dc1-4869-ba67-08231d5d95b4", 00:38:32.520 "zoned": false 00:38:32.520 } 00:38:32.520 ] 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.520 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.wP2hu5pAQT 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:32.521 rmmod nvme_tcp 00:38:32.521 rmmod nvme_fabrics 00:38:32.521 rmmod nvme_keyring 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 104145 ']' 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 104145 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 104145 ']' 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 104145 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:38:32.521 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 104145 00:38:32.778 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:38:32.778 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:38:32.778 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 104145' 00:38:32.778 killing process with pid 104145 00:38:32.778 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 104145 00:38:32.778 [2024-05-15 01:04:35.827612] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:38:32.778 [2024-05-15 01:04:35.827653] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:32.778 [2024-05-15 01:04:35.827665] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:38:32.778 01:04:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 104145 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:38:32.778 00:38:32.778 real 0m2.561s 00:38:32.778 user 0m2.445s 00:38:32.778 sys 0m0.610s 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:32.778 01:04:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.778 ************************************ 00:38:32.778 END TEST nvmf_async_init 00:38:32.778 ************************************ 00:38:33.036 01:04:36 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:38:33.036 01:04:36 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:38:33.036 01:04:36 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:33.036 01:04:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:33.036 ************************************ 00:38:33.036 START TEST dma 00:38:33.036 ************************************ 00:38:33.036 01:04:36 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:38:33.036 * Looking for test storage... 00:38:33.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:33.036 01:04:36 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:33.036 01:04:36 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:33.036 01:04:36 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:33.036 01:04:36 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:33.036 01:04:36 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.036 01:04:36 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.036 01:04:36 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.036 01:04:36 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:38:33.036 01:04:36 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:33.036 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:33.037 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:33.037 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:33.037 01:04:36 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:33.037 01:04:36 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:38:33.037 01:04:36 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:38:33.037 00:38:33.037 real 0m0.099s 00:38:33.037 user 0m0.051s 00:38:33.037 sys 0m0.052s 00:38:33.037 01:04:36 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:33.037 ************************************ 00:38:33.037 END TEST dma 00:38:33.037 ************************************ 
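TEST dma finishes in a fraction of a second above because dma.sh bails out as soon as it sees a non-RDMA transport (the '[' tcp '!=' rdma ']' check followed by exit 0 at host/dma.sh@12-13): the DMA path it exercises is RDMA-specific, so on tcp there is nothing to run and the test is still reported as passed. A minimal paraphrase of that guard, assuming the harness's usual $TEST_TRANSPORT variable; the trace only shows the already-expanded literal 'tcp':
# dma only applies to RDMA transports; skip cleanly everywhere else
if [ "$TEST_TRANSPORT" != rdma ]; then
    exit 0
fi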
00:38:33.037 01:04:36 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:38:33.037 01:04:36 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:38:33.037 01:04:36 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:38:33.037 01:04:36 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:33.037 01:04:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:33.037 ************************************ 00:38:33.037 START TEST nvmf_identify 00:38:33.037 ************************************ 00:38:33.037 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:38:33.037 * Looking for test storage... 00:38:33.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:33.296 Cannot find device "nvmf_tgt_br" 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:33.296 Cannot find device "nvmf_tgt_br2" 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:33.296 Cannot find device "nvmf_tgt_br" 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:38:33.296 Cannot find device "nvmf_tgt_br2" 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:33.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:33.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:33.296 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:33.554 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:33.554 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:33.555 01:04:36 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:33.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:33.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:38:33.555 00:38:33.555 --- 10.0.0.2 ping statistics --- 00:38:33.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.555 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:33.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:33.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:38:33.555 00:38:33.555 --- 10.0.0.3 ping statistics --- 00:38:33.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.555 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:33.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:33.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:38:33.555 00:38:33.555 --- 10.0.0.1 ping statistics --- 00:38:33.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.555 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=104406 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 104406 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 104406 ']' 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:38:33.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
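The three successful pings above close out nvmf_veth_init: the fixture is a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the initiator-side end nvmf_init_if (10.0.0.1) left in the root namespace, and the peer interfaces nvmf_init_br, nvmf_tgt_br and nvmf_tgt_br2 enslaved to the nvmf_br bridge so initiator and target can reach each other. The earlier "Cannot find device" and "Cannot open network namespace" messages are only the best-effort cleanup of a previous run. A condensed replay of the bring-up as traced (run as root; error handling omitted):

    # Recreate the nvmf test topology: one namespace, three veth pairs, one bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3           # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns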
00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:38:33.555 01:04:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:33.813 [2024-05-15 01:04:36.844520] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:38:33.813 [2024-05-15 01:04:36.845194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:33.813 [2024-05-15 01:04:36.988549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:33.813 [2024-05-15 01:04:37.081215] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:33.813 [2024-05-15 01:04:37.081278] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:33.813 [2024-05-15 01:04:37.081290] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:33.813 [2024-05-15 01:04:37.081299] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:33.813 [2024-05-15 01:04:37.081306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:33.813 [2024-05-15 01:04:37.081464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.813 [2024-05-15 01:04:37.081567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:33.813 [2024-05-15 01:04:37.082071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:33.813 [2024-05-15 01:04:37.082118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:34.749 [2024-05-15 01:04:37.835923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:34.749 Malloc0 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:34.749 01:04:37 
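With the namespace up, the target application is launched inside it and the test blocks until its RPC socket is live before issuing any configuration. The launch line below is taken from host/identify.sh@18-19 above; the polling loop is only a simplified stand-in for the autotest waitforlisten helper, not its real implementation, and it assumes scripts/rpc.py and the default /var/tmp/spdk.sock socket path shown in the trace.

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start the NVMe-oF target inside the target namespace: 4 cores, all trace groups enabled.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    trap 'kill "$nvmfpid"' SIGINT SIGTERM EXIT

    # Simplified stand-in for waitforlisten: poll the RPC socket until it answers.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done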
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:34.749 [2024-05-15 01:04:37.945236] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:34.749 [2024-05-15 01:04:37.945478] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:34.749 [ 00:38:34.749 { 00:38:34.749 "allow_any_host": true, 00:38:34.749 "hosts": [], 00:38:34.749 "listen_addresses": [ 00:38:34.749 { 00:38:34.749 "adrfam": "IPv4", 00:38:34.749 "traddr": "10.0.0.2", 00:38:34.749 "trsvcid": "4420", 00:38:34.749 "trtype": "TCP" 00:38:34.749 } 00:38:34.749 ], 00:38:34.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:34.749 "subtype": "Discovery" 00:38:34.749 }, 00:38:34.749 { 00:38:34.749 "allow_any_host": true, 00:38:34.749 "hosts": [], 00:38:34.749 "listen_addresses": [ 00:38:34.749 { 00:38:34.749 "adrfam": "IPv4", 00:38:34.749 "traddr": "10.0.0.2", 00:38:34.749 "trsvcid": "4420", 00:38:34.749 "trtype": "TCP" 00:38:34.749 } 00:38:34.749 ], 00:38:34.749 "max_cntlid": 65519, 00:38:34.749 "max_namespaces": 32, 00:38:34.749 "min_cntlid": 1, 00:38:34.749 "model_number": "SPDK bdev Controller", 00:38:34.749 "namespaces": [ 00:38:34.749 { 00:38:34.749 "bdev_name": "Malloc0", 00:38:34.749 "eui64": "ABCDEF0123456789", 00:38:34.749 "name": "Malloc0", 00:38:34.749 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:38:34.749 "nsid": 1, 00:38:34.749 "uuid": "47594db8-ba26-46cb-be9b-3f9ec9fe6fba" 00:38:34.749 } 00:38:34.749 ], 00:38:34.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.749 "serial_number": "SPDK00000000000001", 
00:38:34.749 "subtype": "NVMe" 00:38:34.749 } 00:38:34.749 ] 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.749 01:04:37 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:38:34.749 [2024-05-15 01:04:37.990823] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:38:34.749 [2024-05-15 01:04:37.990868] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104466 ] 00:38:35.010 [2024-05-15 01:04:38.129030] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:38:35.010 [2024-05-15 01:04:38.129112] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:38:35.010 [2024-05-15 01:04:38.129119] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:38:35.010 [2024-05-15 01:04:38.129137] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:38:35.010 [2024-05-15 01:04:38.129148] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:38:35.010 [2024-05-15 01:04:38.129317] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:38:35.010 [2024-05-15 01:04:38.129373] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fac580 0 00:38:35.010 [2024-05-15 01:04:38.136617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:38:35.010 [2024-05-15 01:04:38.136641] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:38:35.010 [2024-05-15 01:04:38.136648] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:38:35.010 [2024-05-15 01:04:38.136652] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:38:35.010 [2024-05-15 01:04:38.136704] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.010 [2024-05-15 01:04:38.136713] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.010 [2024-05-15 01:04:38.136717] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fac580) 00:38:35.010 [2024-05-15 01:04:38.136733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:38:35.010 [2024-05-15 01:04:38.136766] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff86c0, cid 0, qid 0 00:38:35.010 [2024-05-15 01:04:38.144619] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.010 [2024-05-15 01:04:38.144639] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.010 [2024-05-15 01:04:38.144644] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.010 [2024-05-15 01:04:38.144650] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff86c0) on tqpair=0x1fac580 00:38:35.010 [2024-05-15 01:04:38.144666] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:38:35.010 [2024-05-15 01:04:38.144675] 
nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:38:35.010 [2024-05-15 01:04:38.144682] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:38:35.010 [2024-05-15 01:04:38.144699] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.010 [2024-05-15 01:04:38.144705] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.010 [2024-05-15 01:04:38.144709] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fac580) 00:38:35.010 [2024-05-15 01:04:38.144719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.010 [2024-05-15 01:04:38.144747] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff86c0, cid 0, qid 0 00:38:35.010 [2024-05-15 01:04:38.144823] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.010 [2024-05-15 01:04:38.144831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.010 [2024-05-15 01:04:38.144835] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.144840] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff86c0) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.144849] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:38:35.011 [2024-05-15 01:04:38.144857] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:38:35.011 [2024-05-15 01:04:38.144866] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.144871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.144876] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.144884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.011 [2024-05-15 01:04:38.144905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff86c0, cid 0, qid 0 00:38:35.011 [2024-05-15 01:04:38.144961] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.144968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.144973] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.144977] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff86c0) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.144985] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:38:35.011 [2024-05-15 01:04:38.144994] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:38:35.011 [2024-05-15 01:04:38.145003] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145008] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145012] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.145020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.011 [2024-05-15 01:04:38.145040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff86c0, cid 0, qid 0 00:38:35.011 [2024-05-15 01:04:38.145095] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.145103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.145108] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145112] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff86c0) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.145120] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:38:35.011 [2024-05-15 01:04:38.145131] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145136] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145141] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.145148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.011 [2024-05-15 01:04:38.145168] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff86c0, cid 0, qid 0 00:38:35.011 [2024-05-15 01:04:38.145223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.145236] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.145241] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145246] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff86c0) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.145253] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:38:35.011 [2024-05-15 01:04:38.145259] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:38:35.011 [2024-05-15 01:04:38.145278] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:38:35.011 [2024-05-15 01:04:38.145384] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:38:35.011 [2024-05-15 01:04:38.145397] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:38:35.011 [2024-05-15 01:04:38.145409] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145414] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145418] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.145426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.011 [2024-05-15 01:04:38.145448] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff86c0, cid 0, qid 0 00:38:35.011 [2024-05-15 01:04:38.145504] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.145512] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.145517] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145521] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff86c0) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.145529] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:38:35.011 [2024-05-15 01:04:38.145540] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145549] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.145557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.011 [2024-05-15 01:04:38.145577] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff86c0, cid 0, qid 0 00:38:35.011 [2024-05-15 01:04:38.145649] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.145658] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.145663] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145667] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff86c0) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.145674] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:38:35.011 [2024-05-15 01:04:38.145680] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:38:35.011 [2024-05-15 01:04:38.145689] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:38:35.011 [2024-05-15 01:04:38.145705] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:38:35.011 [2024-05-15 01:04:38.145716] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145721] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.145729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.011 [2024-05-15 01:04:38.145752] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff86c0, cid 0, qid 0 00:38:35.011 [2024-05-15 01:04:38.145855] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.011 [2024-05-15 01:04:38.145867] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:38:35.011 [2024-05-15 01:04:38.145872] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145877] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fac580): datao=0, datal=4096, cccid=0 00:38:35.011 [2024-05-15 01:04:38.145882] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ff86c0) on tqpair(0x1fac580): expected_datao=0, payload_size=4096 00:38:35.011 [2024-05-15 01:04:38.145888] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145897] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145902] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145912] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.145919] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.145923] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145928] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff86c0) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.145938] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:38:35.011 [2024-05-15 01:04:38.145944] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:38:35.011 [2024-05-15 01:04:38.145949] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:38:35.011 [2024-05-15 01:04:38.145956] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:38:35.011 [2024-05-15 01:04:38.145962] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:38:35.011 [2024-05-15 01:04:38.145967] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:38:35.011 [2024-05-15 01:04:38.145978] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:38:35.011 [2024-05-15 01:04:38.145991] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.145997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.146009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:35.011 [2024-05-15 01:04:38.146032] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff86c0, cid 0, qid 0 00:38:35.011 [2024-05-15 01:04:38.146095] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.146103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.146107] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146112] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff86c0) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 
01:04:38.146122] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146127] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146131] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.146139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:35.011 [2024-05-15 01:04:38.146146] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146150] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146155] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.146162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:35.011 [2024-05-15 01:04:38.146169] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146173] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146178] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.146184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:35.011 [2024-05-15 01:04:38.146191] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146196] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146200] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.146207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:35.011 [2024-05-15 01:04:38.146212] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:38:35.011 [2024-05-15 01:04:38.146226] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:38:35.011 [2024-05-15 01:04:38.146235] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146240] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.146248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.011 [2024-05-15 01:04:38.146270] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff86c0, cid 0, qid 0 00:38:35.011 [2024-05-15 01:04:38.146278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8820, cid 1, qid 0 00:38:35.011 [2024-05-15 01:04:38.146284] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8980, cid 2, qid 0 00:38:35.011 [2024-05-15 01:04:38.146289] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.011 [2024-05-15 01:04:38.146295] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8c40, 
cid 4, qid 0 00:38:35.011 [2024-05-15 01:04:38.146392] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.146400] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.146404] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146409] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8c40) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.146416] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:38:35.011 [2024-05-15 01:04:38.146422] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:38:35.011 [2024-05-15 01:04:38.146434] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146439] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.146447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.011 [2024-05-15 01:04:38.146467] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8c40, cid 4, qid 0 00:38:35.011 [2024-05-15 01:04:38.146535] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.011 [2024-05-15 01:04:38.146542] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.011 [2024-05-15 01:04:38.146547] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146551] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fac580): datao=0, datal=4096, cccid=4 00:38:35.011 [2024-05-15 01:04:38.146557] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ff8c40) on tqpair(0x1fac580): expected_datao=0, payload_size=4096 00:38:35.011 [2024-05-15 01:04:38.146562] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146570] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146575] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146584] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.146591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.146615] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146622] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8c40) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.146638] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:38:35.011 [2024-05-15 01:04:38.146679] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146687] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.146695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.011 [2024-05-15 01:04:38.146704] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146708] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146713] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.146720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:38:35.011 [2024-05-15 01:04:38.146749] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8c40, cid 4, qid 0 00:38:35.011 [2024-05-15 01:04:38.146757] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8da0, cid 5, qid 0 00:38:35.011 [2024-05-15 01:04:38.146857] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.011 [2024-05-15 01:04:38.146867] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.011 [2024-05-15 01:04:38.146871] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146876] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fac580): datao=0, datal=1024, cccid=4 00:38:35.011 [2024-05-15 01:04:38.146881] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ff8c40) on tqpair(0x1fac580): expected_datao=0, payload_size=1024 00:38:35.011 [2024-05-15 01:04:38.146887] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146894] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146899] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146906] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.146912] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.146917] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.146921] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8da0) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.192624] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.011 [2024-05-15 01:04:38.192654] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.011 [2024-05-15 01:04:38.192661] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.192667] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8c40) on tqpair=0x1fac580 00:38:35.011 [2024-05-15 01:04:38.192695] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.011 [2024-05-15 01:04:38.192701] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fac580) 00:38:35.011 [2024-05-15 01:04:38.192716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.012 [2024-05-15 01:04:38.192756] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8c40, cid 4, qid 0 00:38:35.012 [2024-05-15 01:04:38.192885] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.012 [2024-05-15 01:04:38.192893] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.012 [2024-05-15 01:04:38.192898] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.192903] 
nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fac580): datao=0, datal=3072, cccid=4 00:38:35.012 [2024-05-15 01:04:38.192909] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ff8c40) on tqpair(0x1fac580): expected_datao=0, payload_size=3072 00:38:35.012 [2024-05-15 01:04:38.192914] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.192924] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.192929] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.192939] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.012 [2024-05-15 01:04:38.192947] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.012 [2024-05-15 01:04:38.192951] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.192956] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8c40) on tqpair=0x1fac580 00:38:35.012 [2024-05-15 01:04:38.192968] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.192973] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fac580) 00:38:35.012 [2024-05-15 01:04:38.192981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.012 [2024-05-15 01:04:38.193008] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8c40, cid 4, qid 0 00:38:35.012 [2024-05-15 01:04:38.193085] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.012 [2024-05-15 01:04:38.193093] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.012 [2024-05-15 01:04:38.193097] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.193102] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fac580): datao=0, datal=8, cccid=4 00:38:35.012 [2024-05-15 01:04:38.193107] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ff8c40) on tqpair(0x1fac580): expected_datao=0, payload_size=8 00:38:35.012 [2024-05-15 01:04:38.193112] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.193120] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.193124] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.233713] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.012 [2024-05-15 01:04:38.233750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.012 [2024-05-15 01:04:38.233756] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.233763] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8c40) on tqpair=0x1fac580 00:38:35.012 ===================================================== 00:38:35.012 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:38:35.012 ===================================================== 00:38:35.012 Controller Capabilities/Features 00:38:35.012 ================================ 00:38:35.012 Vendor ID: 0000 00:38:35.012 Subsystem Vendor ID: 0000 00:38:35.012 Serial Number: .................... 
00:38:35.012 Model Number: ........................................ 00:38:35.012 Firmware Version: 24.05 00:38:35.012 Recommended Arb Burst: 0 00:38:35.012 IEEE OUI Identifier: 00 00 00 00:38:35.012 Multi-path I/O 00:38:35.012 May have multiple subsystem ports: No 00:38:35.012 May have multiple controllers: No 00:38:35.012 Associated with SR-IOV VF: No 00:38:35.012 Max Data Transfer Size: 131072 00:38:35.012 Max Number of Namespaces: 0 00:38:35.012 Max Number of I/O Queues: 1024 00:38:35.012 NVMe Specification Version (VS): 1.3 00:38:35.012 NVMe Specification Version (Identify): 1.3 00:38:35.012 Maximum Queue Entries: 128 00:38:35.012 Contiguous Queues Required: Yes 00:38:35.012 Arbitration Mechanisms Supported 00:38:35.012 Weighted Round Robin: Not Supported 00:38:35.012 Vendor Specific: Not Supported 00:38:35.012 Reset Timeout: 15000 ms 00:38:35.012 Doorbell Stride: 4 bytes 00:38:35.012 NVM Subsystem Reset: Not Supported 00:38:35.012 Command Sets Supported 00:38:35.012 NVM Command Set: Supported 00:38:35.012 Boot Partition: Not Supported 00:38:35.012 Memory Page Size Minimum: 4096 bytes 00:38:35.012 Memory Page Size Maximum: 4096 bytes 00:38:35.012 Persistent Memory Region: Not Supported 00:38:35.012 Optional Asynchronous Events Supported 00:38:35.012 Namespace Attribute Notices: Not Supported 00:38:35.012 Firmware Activation Notices: Not Supported 00:38:35.012 ANA Change Notices: Not Supported 00:38:35.012 PLE Aggregate Log Change Notices: Not Supported 00:38:35.012 LBA Status Info Alert Notices: Not Supported 00:38:35.012 EGE Aggregate Log Change Notices: Not Supported 00:38:35.012 Normal NVM Subsystem Shutdown event: Not Supported 00:38:35.012 Zone Descriptor Change Notices: Not Supported 00:38:35.012 Discovery Log Change Notices: Supported 00:38:35.012 Controller Attributes 00:38:35.012 128-bit Host Identifier: Not Supported 00:38:35.012 Non-Operational Permissive Mode: Not Supported 00:38:35.012 NVM Sets: Not Supported 00:38:35.012 Read Recovery Levels: Not Supported 00:38:35.012 Endurance Groups: Not Supported 00:38:35.012 Predictable Latency Mode: Not Supported 00:38:35.012 Traffic Based Keep ALive: Not Supported 00:38:35.012 Namespace Granularity: Not Supported 00:38:35.012 SQ Associations: Not Supported 00:38:35.012 UUID List: Not Supported 00:38:35.012 Multi-Domain Subsystem: Not Supported 00:38:35.012 Fixed Capacity Management: Not Supported 00:38:35.012 Variable Capacity Management: Not Supported 00:38:35.012 Delete Endurance Group: Not Supported 00:38:35.012 Delete NVM Set: Not Supported 00:38:35.012 Extended LBA Formats Supported: Not Supported 00:38:35.012 Flexible Data Placement Supported: Not Supported 00:38:35.012 00:38:35.012 Controller Memory Buffer Support 00:38:35.012 ================================ 00:38:35.012 Supported: No 00:38:35.012 00:38:35.012 Persistent Memory Region Support 00:38:35.012 ================================ 00:38:35.012 Supported: No 00:38:35.012 00:38:35.012 Admin Command Set Attributes 00:38:35.012 ============================ 00:38:35.012 Security Send/Receive: Not Supported 00:38:35.012 Format NVM: Not Supported 00:38:35.012 Firmware Activate/Download: Not Supported 00:38:35.012 Namespace Management: Not Supported 00:38:35.012 Device Self-Test: Not Supported 00:38:35.012 Directives: Not Supported 00:38:35.012 NVMe-MI: Not Supported 00:38:35.012 Virtualization Management: Not Supported 00:38:35.012 Doorbell Buffer Config: Not Supported 00:38:35.012 Get LBA Status Capability: Not Supported 00:38:35.012 Command & Feature Lockdown Capability: 
Not Supported 00:38:35.012 Abort Command Limit: 1 00:38:35.012 Async Event Request Limit: 4 00:38:35.012 Number of Firmware Slots: N/A 00:38:35.012 Firmware Slot 1 Read-Only: N/A 00:38:35.012 Firmware Activation Without Reset: N/A 00:38:35.012 Multiple Update Detection Support: N/A 00:38:35.012 Firmware Update Granularity: No Information Provided 00:38:35.012 Per-Namespace SMART Log: No 00:38:35.012 Asymmetric Namespace Access Log Page: Not Supported 00:38:35.012 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:38:35.012 Command Effects Log Page: Not Supported 00:38:35.012 Get Log Page Extended Data: Supported 00:38:35.012 Telemetry Log Pages: Not Supported 00:38:35.012 Persistent Event Log Pages: Not Supported 00:38:35.012 Supported Log Pages Log Page: May Support 00:38:35.012 Commands Supported & Effects Log Page: Not Supported 00:38:35.012 Feature Identifiers & Effects Log Page:May Support 00:38:35.012 NVMe-MI Commands & Effects Log Page: May Support 00:38:35.012 Data Area 4 for Telemetry Log: Not Supported 00:38:35.012 Error Log Page Entries Supported: 128 00:38:35.012 Keep Alive: Not Supported 00:38:35.012 00:38:35.012 NVM Command Set Attributes 00:38:35.012 ========================== 00:38:35.012 Submission Queue Entry Size 00:38:35.012 Max: 1 00:38:35.012 Min: 1 00:38:35.012 Completion Queue Entry Size 00:38:35.012 Max: 1 00:38:35.012 Min: 1 00:38:35.012 Number of Namespaces: 0 00:38:35.012 Compare Command: Not Supported 00:38:35.012 Write Uncorrectable Command: Not Supported 00:38:35.012 Dataset Management Command: Not Supported 00:38:35.012 Write Zeroes Command: Not Supported 00:38:35.012 Set Features Save Field: Not Supported 00:38:35.012 Reservations: Not Supported 00:38:35.012 Timestamp: Not Supported 00:38:35.012 Copy: Not Supported 00:38:35.012 Volatile Write Cache: Not Present 00:38:35.012 Atomic Write Unit (Normal): 1 00:38:35.012 Atomic Write Unit (PFail): 1 00:38:35.012 Atomic Compare & Write Unit: 1 00:38:35.012 Fused Compare & Write: Supported 00:38:35.012 Scatter-Gather List 00:38:35.012 SGL Command Set: Supported 00:38:35.012 SGL Keyed: Supported 00:38:35.012 SGL Bit Bucket Descriptor: Not Supported 00:38:35.012 SGL Metadata Pointer: Not Supported 00:38:35.012 Oversized SGL: Not Supported 00:38:35.012 SGL Metadata Address: Not Supported 00:38:35.012 SGL Offset: Supported 00:38:35.012 Transport SGL Data Block: Not Supported 00:38:35.012 Replay Protected Memory Block: Not Supported 00:38:35.012 00:38:35.012 Firmware Slot Information 00:38:35.012 ========================= 00:38:35.012 Active slot: 0 00:38:35.012 00:38:35.012 00:38:35.012 Error Log 00:38:35.012 ========= 00:38:35.012 00:38:35.012 Active Namespaces 00:38:35.012 ================= 00:38:35.012 Discovery Log Page 00:38:35.012 ================== 00:38:35.012 Generation Counter: 2 00:38:35.012 Number of Records: 2 00:38:35.012 Record Format: 0 00:38:35.012 00:38:35.012 Discovery Log Entry 0 00:38:35.012 ---------------------- 00:38:35.012 Transport Type: 3 (TCP) 00:38:35.012 Address Family: 1 (IPv4) 00:38:35.012 Subsystem Type: 3 (Current Discovery Subsystem) 00:38:35.012 Entry Flags: 00:38:35.012 Duplicate Returned Information: 1 00:38:35.012 Explicit Persistent Connection Support for Discovery: 1 00:38:35.012 Transport Requirements: 00:38:35.012 Secure Channel: Not Required 00:38:35.012 Port ID: 0 (0x0000) 00:38:35.012 Controller ID: 65535 (0xffff) 00:38:35.012 Admin Max SQ Size: 128 00:38:35.012 Transport Service Identifier: 4420 00:38:35.012 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:38:35.012 Transport Address: 10.0.0.2 00:38:35.012 Discovery Log Entry 1 00:38:35.012 ---------------------- 00:38:35.012 Transport Type: 3 (TCP) 00:38:35.012 Address Family: 1 (IPv4) 00:38:35.012 Subsystem Type: 2 (NVM Subsystem) 00:38:35.012 Entry Flags: 00:38:35.012 Duplicate Returned Information: 0 00:38:35.012 Explicit Persistent Connection Support for Discovery: 0 00:38:35.012 Transport Requirements: 00:38:35.012 Secure Channel: Not Required 00:38:35.012 Port ID: 0 (0x0000) 00:38:35.012 Controller ID: 65535 (0xffff) 00:38:35.012 Admin Max SQ Size: 128 00:38:35.012 Transport Service Identifier: 4420 00:38:35.012 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:38:35.012 Transport Address: 10.0.0.2 [2024-05-15 01:04:38.233966] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:38:35.012 [2024-05-15 01:04:38.233990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:35.012 [2024-05-15 01:04:38.233999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:35.012 [2024-05-15 01:04:38.234006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:35.012 [2024-05-15 01:04:38.234013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:35.012 [2024-05-15 01:04:38.234029] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234034] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234039] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.012 [2024-05-15 01:04:38.234052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.012 [2024-05-15 01:04:38.234084] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.012 [2024-05-15 01:04:38.234188] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.012 [2024-05-15 01:04:38.234196] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.012 [2024-05-15 01:04:38.234201] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234206] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.012 [2024-05-15 01:04:38.234216] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234221] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234226] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.012 [2024-05-15 01:04:38.234234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.012 [2024-05-15 01:04:38.234260] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.012 [2024-05-15 01:04:38.234346] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.012 [2024-05-15 01:04:38.234354] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:38:35.012 [2024-05-15 01:04:38.234358] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234363] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.012 [2024-05-15 01:04:38.234370] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:38:35.012 [2024-05-15 01:04:38.234375] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:38:35.012 [2024-05-15 01:04:38.234387] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234392] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234396] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.012 [2024-05-15 01:04:38.234404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.012 [2024-05-15 01:04:38.234424] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.012 [2024-05-15 01:04:38.234482] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.012 [2024-05-15 01:04:38.234489] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.012 [2024-05-15 01:04:38.234494] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234499] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.012 [2024-05-15 01:04:38.234512] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234517] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234522] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.012 [2024-05-15 01:04:38.234530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.012 [2024-05-15 01:04:38.234549] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.012 [2024-05-15 01:04:38.234625] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.012 [2024-05-15 01:04:38.234636] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.012 [2024-05-15 01:04:38.234641] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234645] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.012 [2024-05-15 01:04:38.234659] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.012 [2024-05-15 01:04:38.234669] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.012 [2024-05-15 01:04:38.234677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.012 [2024-05-15 01:04:38.234701] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.012 [2024-05-15 01:04:38.234755] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.012 [2024-05-15 01:04:38.234763] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.234767] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.234772] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.234785] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.234790] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.234795] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.234803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.234823] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.234877] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.234885] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.234889] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.234894] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.234907] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.234912] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.234916] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.234924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.234944] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.235012] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.235023] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.235028] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235032] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.235046] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235051] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235056] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.235064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.235085] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.235140] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.235148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 
[2024-05-15 01:04:38.235152] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235157] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.235169] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235175] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235179] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.235187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.235206] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.235269] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.235277] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.235281] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235286] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.235298] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235304] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235308] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.235316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.235335] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.235389] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.235397] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.235401] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235406] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.235418] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235423] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235428] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.235436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.235454] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.235510] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.235518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.235522] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.235539] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235549] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.235557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.235576] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.235650] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.235660] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.235664] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235669] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.235682] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235687] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235692] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.235700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.235721] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.235778] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.235787] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.235791] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235796] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.235809] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235814] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235819] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.235827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.235846] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.235900] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.235908] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.235912] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235917] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.235929] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235934] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.235939] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.235947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.235966] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.236022] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.236030] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.236034] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236039] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.236051] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236057] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236061] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.236069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.236088] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.236141] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.236149] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.236153] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236158] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.236170] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236175] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236180] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.236188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.236207] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.236263] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.236271] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.236275] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236280] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.236292] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236297] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236302] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.236309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.236328] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.236382] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.236389] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.236394] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236398] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.236411] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236416] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236421] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.236429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.236447] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.236501] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.236508] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.236513] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236518] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.236530] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236536] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.236540] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.236548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.013 [2024-05-15 01:04:38.236567] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.240617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.240636] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.240642] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.240647] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.240662] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.240668] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.240673] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fac580) 00:38:35.013 [2024-05-15 01:04:38.240682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
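The repetitive FABRIC PROPERTY GET traces in this stretch are the discovery controller being polled while it shuts down; the interesting payload was the discovery log page printed just above it. Those two entries describe the same TCP listener at 10.0.0.2 port 4420: entry 0 is the discovery subsystem itself (nqn.2014-08.org.nvmexpress.discovery) and entry 1 is the NVM subsystem nqn.2016-06.io.spdk:cnode1 that the next identify pass targets. For readers following along, a minimal hand-written sketch of how a host would turn those fields into a transport ID and a connected controller with SPDK's public C API (include/spdk/nvme.h) is shown below. It is illustrative only, not the spdk_nvme_identify tool the test actually runs; apart from the address fields quoted from the log, all names and details are assumptions.

    /* Sketch only: connect to the subsystem advertised by Discovery Log Entry 1. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";   /* assumed app name, any string works */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Same key:value fields the discovery entry reports (and the same
         * string format the test passes to spdk_nvme_identify with -r). */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return 1;
        }

        /* Synchronous connect: this call is what drives the
         * "setting state to ..." controller-initialization traces below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "could not connect to %s:%s\n", trid.traddr, trid.trsvcid);
            return 1;
        }

        /* ... identify / I/O would happen here ... */

        spdk_nvme_detach(ctrlr);
        return 0;
    }

In the trace that follows, the string after -r on the spdk_nvme_identify command line is parsed exactly this way before the connect to nqn.2016-06.io.spdk:cnode1 begins.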
00:38:35.013 [2024-05-15 01:04:38.240707] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ff8ae0, cid 3, qid 0 00:38:35.013 [2024-05-15 01:04:38.240771] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.013 [2024-05-15 01:04:38.240779] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.013 [2024-05-15 01:04:38.240783] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.013 [2024-05-15 01:04:38.240788] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ff8ae0) on tqpair=0x1fac580 00:38:35.013 [2024-05-15 01:04:38.240798] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:38:35.013 00:38:35.013 01:04:38 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:38:35.013 [2024-05-15 01:04:38.273479] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:38:35.013 [2024-05-15 01:04:38.273520] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104472 ] 00:38:35.276 [2024-05-15 01:04:38.410340] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:38:35.276 [2024-05-15 01:04:38.410411] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:38:35.276 [2024-05-15 01:04:38.410419] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:38:35.276 [2024-05-15 01:04:38.410434] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:38:35.276 [2024-05-15 01:04:38.410445] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:38:35.276 [2024-05-15 01:04:38.410582] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:38:35.276 [2024-05-15 01:04:38.414658] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1548580 0 00:38:35.276 [2024-05-15 01:04:38.422615] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:38:35.276 [2024-05-15 01:04:38.422637] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:38:35.276 [2024-05-15 01:04:38.422643] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:38:35.276 [2024-05-15 01:04:38.422647] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:38:35.276 [2024-05-15 01:04:38.422694] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.276 [2024-05-15 01:04:38.422702] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.276 [2024-05-15 01:04:38.422706] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548580) 00:38:35.277 [2024-05-15 01:04:38.422722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:38:35.277 [2024-05-15 01:04:38.422755] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15946c0, cid 0, qid 0 00:38:35.277 [2024-05-15 01:04:38.430619] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.277 [2024-05-15 01:04:38.430641] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.277 [2024-05-15 01:04:38.430646] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.430651] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15946c0) on tqpair=0x1548580 00:38:35.277 [2024-05-15 01:04:38.430667] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:38:35.277 [2024-05-15 01:04:38.430675] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:38:35.277 [2024-05-15 01:04:38.430682] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:38:35.277 [2024-05-15 01:04:38.430698] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.430704] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.430708] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548580) 00:38:35.277 [2024-05-15 01:04:38.430717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.277 [2024-05-15 01:04:38.430746] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15946c0, cid 0, qid 0 00:38:35.277 [2024-05-15 01:04:38.430814] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.277 [2024-05-15 01:04:38.430821] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.277 [2024-05-15 01:04:38.430825] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.430830] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15946c0) on tqpair=0x1548580 00:38:35.277 [2024-05-15 01:04:38.430837] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:38:35.277 [2024-05-15 01:04:38.430845] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:38:35.277 [2024-05-15 01:04:38.430853] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.430858] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.430862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548580) 00:38:35.277 [2024-05-15 01:04:38.430870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.277 [2024-05-15 01:04:38.430889] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15946c0, cid 0, qid 0 00:38:35.277 [2024-05-15 01:04:38.430943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.277 [2024-05-15 01:04:38.430950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.277 [2024-05-15 01:04:38.430954] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.430959] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15946c0) on tqpair=0x1548580 00:38:35.277 [2024-05-15 01:04:38.430966] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:38:35.277 [2024-05-15 01:04:38.430975] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:38:35.277 [2024-05-15 01:04:38.430983] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.430987] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.430991] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548580) 00:38:35.277 [2024-05-15 01:04:38.430999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.277 [2024-05-15 01:04:38.431029] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15946c0, cid 0, qid 0 00:38:35.277 [2024-05-15 01:04:38.431092] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.277 [2024-05-15 01:04:38.431099] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.277 [2024-05-15 01:04:38.431103] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431107] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15946c0) on tqpair=0x1548580 00:38:35.277 [2024-05-15 01:04:38.431115] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:38:35.277 [2024-05-15 01:04:38.431126] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431131] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431135] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548580) 00:38:35.277 [2024-05-15 01:04:38.431142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.277 [2024-05-15 01:04:38.431161] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15946c0, cid 0, qid 0 00:38:35.277 [2024-05-15 01:04:38.431221] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.277 [2024-05-15 01:04:38.431228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.277 [2024-05-15 01:04:38.431232] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431236] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15946c0) on tqpair=0x1548580 00:38:35.277 [2024-05-15 01:04:38.431243] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:38:35.277 [2024-05-15 01:04:38.431248] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:38:35.277 [2024-05-15 01:04:38.431257] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:38:35.277 [2024-05-15 01:04:38.431363] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:38:35.277 [2024-05-15 01:04:38.431368] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:38:35.277 [2024-05-15 01:04:38.431377] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431382] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431386] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548580) 00:38:35.277 [2024-05-15 01:04:38.431393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.277 [2024-05-15 01:04:38.431413] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15946c0, cid 0, qid 0 00:38:35.277 [2024-05-15 01:04:38.431473] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.277 [2024-05-15 01:04:38.431480] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.277 [2024-05-15 01:04:38.431483] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431488] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15946c0) on tqpair=0x1548580 00:38:35.277 [2024-05-15 01:04:38.431495] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:38:35.277 [2024-05-15 01:04:38.431505] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431510] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431514] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548580) 00:38:35.277 [2024-05-15 01:04:38.431522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.277 [2024-05-15 01:04:38.431540] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15946c0, cid 0, qid 0 00:38:35.277 [2024-05-15 01:04:38.431594] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.277 [2024-05-15 01:04:38.431621] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.277 [2024-05-15 01:04:38.431626] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431630] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15946c0) on tqpair=0x1548580 00:38:35.277 [2024-05-15 01:04:38.431637] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:38:35.277 [2024-05-15 01:04:38.431642] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:38:35.277 [2024-05-15 01:04:38.431652] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:38:35.277 [2024-05-15 01:04:38.431668] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:38:35.277 [2024-05-15 01:04:38.431678] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.277 [2024-05-15 01:04:38.431682] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548580) 00:38:35.277 [2024-05-15 01:04:38.431690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.277 [2024-05-15 01:04:38.431712] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15946c0, cid 0, qid 0 00:38:35.277 [2024-05-15 01:04:38.431817] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.277 [2024-05-15 01:04:38.431824] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.278 [2024-05-15 01:04:38.431828] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.431833] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548580): datao=0, datal=4096, cccid=0 00:38:35.278 [2024-05-15 01:04:38.431838] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15946c0) on tqpair(0x1548580): expected_datao=0, payload_size=4096 00:38:35.278 [2024-05-15 01:04:38.431843] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.431852] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.431856] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.431865] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.278 [2024-05-15 01:04:38.431871] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.278 [2024-05-15 01:04:38.431875] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.431879] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15946c0) on tqpair=0x1548580 00:38:35.278 [2024-05-15 01:04:38.431889] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:38:35.278 [2024-05-15 01:04:38.431896] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:38:35.278 [2024-05-15 01:04:38.431901] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:38:35.278 [2024-05-15 01:04:38.431906] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:38:35.278 [2024-05-15 01:04:38.431911] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:38:35.278 [2024-05-15 01:04:38.431916] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:38:35.278 [2024-05-15 01:04:38.431925] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:38:35.278 [2024-05-15 01:04:38.431938] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.431943] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.431947] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548580) 00:38:35.278 [2024-05-15 01:04:38.431955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:35.278 [2024-05-15 01:04:38.431976] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15946c0, cid 0, qid 0 00:38:35.278 [2024-05-15 01:04:38.432038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.278 [2024-05-15 01:04:38.432045] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.278 [2024-05-15 01:04:38.432049] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432053] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15946c0) on tqpair=0x1548580 00:38:35.278 [2024-05-15 01:04:38.432063] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432068] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432072] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548580) 00:38:35.278 [2024-05-15 01:04:38.432079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:35.278 [2024-05-15 01:04:38.432085] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432090] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432094] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1548580) 00:38:35.278 [2024-05-15 01:04:38.432100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:35.278 [2024-05-15 01:04:38.432106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432110] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432114] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1548580) 00:38:35.278 [2024-05-15 01:04:38.432120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:35.278 [2024-05-15 01:04:38.432127] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432131] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432135] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.278 [2024-05-15 01:04:38.432141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:35.278 [2024-05-15 01:04:38.432147] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:38:35.278 [2024-05-15 01:04:38.432160] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:38:35.278 [2024-05-15 01:04:38.432168] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432172] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548580) 00:38:35.278 [2024-05-15 01:04:38.432179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.278 [2024-05-15 01:04:38.432200] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15946c0, cid 0, qid 0 00:38:35.278 [2024-05-15 01:04:38.432207] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594820, cid 1, qid 0 00:38:35.278 [2024-05-15 01:04:38.432212] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594980, cid 2, qid 0 00:38:35.278 [2024-05-15 01:04:38.432218] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.278 [2024-05-15 01:04:38.432222] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594c40, cid 4, qid 0 00:38:35.278 [2024-05-15 01:04:38.432318] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.278 [2024-05-15 01:04:38.432325] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.278 [2024-05-15 01:04:38.432329] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432333] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594c40) on tqpair=0x1548580 00:38:35.278 [2024-05-15 01:04:38.432340] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:38:35.278 [2024-05-15 01:04:38.432346] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:38:35.278 [2024-05-15 01:04:38.432359] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:38:35.278 [2024-05-15 01:04:38.432366] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:38:35.278 [2024-05-15 01:04:38.432374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432378] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432383] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548580) 00:38:35.278 [2024-05-15 01:04:38.432390] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:35.278 [2024-05-15 01:04:38.432410] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594c40, cid 4, qid 0 00:38:35.278 [2024-05-15 01:04:38.432469] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.278 [2024-05-15 01:04:38.432476] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.278 [2024-05-15 01:04:38.432480] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432485] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594c40) on tqpair=0x1548580 00:38:35.278 [2024-05-15 01:04:38.432541] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:38:35.278 [2024-05-15 01:04:38.432553] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:38:35.278 [2024-05-15 01:04:38.432562] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432567] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548580) 00:38:35.278 [2024-05-15 01:04:38.432574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.278 [2024-05-15 01:04:38.432606] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594c40, cid 4, qid 0 00:38:35.278 [2024-05-15 01:04:38.432797] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.278 [2024-05-15 01:04:38.432814] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.278 [2024-05-15 01:04:38.432820] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432824] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548580): datao=0, datal=4096, cccid=4 00:38:35.278 [2024-05-15 01:04:38.432829] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1594c40) on tqpair(0x1548580): expected_datao=0, payload_size=4096 00:38:35.278 [2024-05-15 01:04:38.432834] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432843] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432847] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432856] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.278 [2024-05-15 01:04:38.432863] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.278 [2024-05-15 01:04:38.432867] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.278 [2024-05-15 01:04:38.432871] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594c40) on tqpair=0x1548580 00:38:35.279 [2024-05-15 01:04:38.432891] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:38:35.279 [2024-05-15 01:04:38.432907] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:38:35.279 [2024-05-15 01:04:38.432919] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:38:35.279 [2024-05-15 01:04:38.432928] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.432932] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.432941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.279 [2024-05-15 01:04:38.432967] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594c40, cid 4, qid 0 00:38:35.279 [2024-05-15 01:04:38.433061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.279 [2024-05-15 01:04:38.433068] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.279 [2024-05-15 01:04:38.433072] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433076] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548580): datao=0, datal=4096, cccid=4 00:38:35.279 [2024-05-15 01:04:38.433081] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1594c40) on tqpair(0x1548580): expected_datao=0, payload_size=4096 00:38:35.279 [2024-05-15 01:04:38.433086] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433093] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433098] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
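The traces above show the initialization state machine for nqn.2016-06.io.spdk:cnode1 walking through identify controller, configure AER, keep-alive, set number of queues and identify active ns, at which point "Namespace 1 was added". As a hedged sketch of what the host side does with that result once the controller reaches ready (again using the public include/spdk/nvme.h API rather than the test's own code; the helper name print_ctrlr_info is made up for illustration), reading the cached identify data and enumerating active namespaces looks roughly like this:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Sketch only: called with the controller returned by spdk_nvme_connect()
     * in the earlier example, after initialization has reached "ready". */
    static void print_ctrlr_info(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        uint32_t nsid;

        /* MN and FR are fixed-width, space-padded fields in the identify data. */
        printf("Model Number:     %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
        printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);

        /* "Namespace 1 was added" in the trace corresponds to this walk over
         * the active namespace list (the IDENTIFY with cdw10:00000002 above). */
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            printf("Namespace %" PRIu32 ": %" PRIu64 " bytes\n",
                   nsid, spdk_nvme_ns_get_size(ns));
        }
    }

The subsequent IDENTIFY commands with cdw10:00000000 and cdw10:00000003 in the trace are the per-namespace identify and namespace-ID-descriptor fetches for that same namespace.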
00:38:35.279 [2024-05-15 01:04:38.433107] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.279 [2024-05-15 01:04:38.433113] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.279 [2024-05-15 01:04:38.433117] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433121] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594c40) on tqpair=0x1548580 00:38:35.279 [2024-05-15 01:04:38.433139] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:38:35.279 [2024-05-15 01:04:38.433151] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:38:35.279 [2024-05-15 01:04:38.433160] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433164] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.433172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.279 [2024-05-15 01:04:38.433193] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594c40, cid 4, qid 0 00:38:35.279 [2024-05-15 01:04:38.433266] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.279 [2024-05-15 01:04:38.433273] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.279 [2024-05-15 01:04:38.433277] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433281] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548580): datao=0, datal=4096, cccid=4 00:38:35.279 [2024-05-15 01:04:38.433286] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1594c40) on tqpair(0x1548580): expected_datao=0, payload_size=4096 00:38:35.279 [2024-05-15 01:04:38.433291] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433298] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433302] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433311] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.279 [2024-05-15 01:04:38.433317] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.279 [2024-05-15 01:04:38.433321] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433325] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594c40) on tqpair=0x1548580 00:38:35.279 [2024-05-15 01:04:38.433335] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:38:35.279 [2024-05-15 01:04:38.433344] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:38:35.279 [2024-05-15 01:04:38.433355] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:38:35.279 [2024-05-15 01:04:38.433362] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
set doorbell buffer config (timeout 30000 ms) 00:38:35.279 [2024-05-15 01:04:38.433368] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:38:35.279 [2024-05-15 01:04:38.433374] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:38:35.279 [2024-05-15 01:04:38.433378] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:38:35.279 [2024-05-15 01:04:38.433384] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:38:35.279 [2024-05-15 01:04:38.433415] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433422] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.433430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.279 [2024-05-15 01:04:38.433437] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433442] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433446] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.433452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:38:35.279 [2024-05-15 01:04:38.433482] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594c40, cid 4, qid 0 00:38:35.279 [2024-05-15 01:04:38.433490] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594da0, cid 5, qid 0 00:38:35.279 [2024-05-15 01:04:38.433562] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.279 [2024-05-15 01:04:38.433569] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.279 [2024-05-15 01:04:38.433573] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433577] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594c40) on tqpair=0x1548580 00:38:35.279 [2024-05-15 01:04:38.433586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.279 [2024-05-15 01:04:38.433592] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.279 [2024-05-15 01:04:38.433609] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433614] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594da0) on tqpair=0x1548580 00:38:35.279 [2024-05-15 01:04:38.433627] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433632] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.433639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.279 [2024-05-15 01:04:38.433660] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594da0, cid 5, qid 0 00:38:35.279 [2024-05-15 01:04:38.433722] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.279 
[2024-05-15 01:04:38.433729] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.279 [2024-05-15 01:04:38.433734] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433738] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594da0) on tqpair=0x1548580 00:38:35.279 [2024-05-15 01:04:38.433750] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433755] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.433762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.279 [2024-05-15 01:04:38.433780] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594da0, cid 5, qid 0 00:38:35.279 [2024-05-15 01:04:38.433837] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.279 [2024-05-15 01:04:38.433843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.279 [2024-05-15 01:04:38.433847] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433852] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594da0) on tqpair=0x1548580 00:38:35.279 [2024-05-15 01:04:38.433863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433869] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.433876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.279 [2024-05-15 01:04:38.433894] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594da0, cid 5, qid 0 00:38:35.279 [2024-05-15 01:04:38.433947] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.279 [2024-05-15 01:04:38.433954] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.279 [2024-05-15 01:04:38.433958] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433962] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594da0) on tqpair=0x1548580 00:38:35.279 [2024-05-15 01:04:38.433977] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.433982] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.433990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.279 [2024-05-15 01:04:38.433997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.434002] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.434008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.279 [2024-05-15 01:04:38.434016] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.434020] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=6 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.434027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.279 [2024-05-15 01:04:38.434035] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.434039] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1548580) 00:38:35.279 [2024-05-15 01:04:38.434046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.279 [2024-05-15 01:04:38.434067] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594da0, cid 5, qid 0 00:38:35.279 [2024-05-15 01:04:38.434074] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594c40, cid 4, qid 0 00:38:35.279 [2024-05-15 01:04:38.434079] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594f00, cid 6, qid 0 00:38:35.279 [2024-05-15 01:04:38.434084] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1595060, cid 7, qid 0 00:38:35.279 [2024-05-15 01:04:38.434226] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.279 [2024-05-15 01:04:38.434234] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.279 [2024-05-15 01:04:38.434239] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.279 [2024-05-15 01:04:38.434243] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548580): datao=0, datal=8192, cccid=5 00:38:35.279 [2024-05-15 01:04:38.434248] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1594da0) on tqpair(0x1548580): expected_datao=0, payload_size=8192 00:38:35.280 [2024-05-15 01:04:38.434253] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434270] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434275] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434281] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.280 [2024-05-15 01:04:38.434288] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.280 [2024-05-15 01:04:38.434292] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434295] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548580): datao=0, datal=512, cccid=4 00:38:35.280 [2024-05-15 01:04:38.434300] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1594c40) on tqpair(0x1548580): expected_datao=0, payload_size=512 00:38:35.280 [2024-05-15 01:04:38.434305] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434312] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434315] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434321] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.280 [2024-05-15 01:04:38.434327] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.280 [2024-05-15 01:04:38.434331] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434335] 
nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548580): datao=0, datal=512, cccid=6 00:38:35.280 [2024-05-15 01:04:38.434340] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1594f00) on tqpair(0x1548580): expected_datao=0, payload_size=512 00:38:35.280 [2024-05-15 01:04:38.434344] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434361] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434365] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434370] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:35.280 [2024-05-15 01:04:38.434378] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:35.280 [2024-05-15 01:04:38.434381] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434385] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548580): datao=0, datal=4096, cccid=7 00:38:35.280 [2024-05-15 01:04:38.434401] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1595060) on tqpair(0x1548580): expected_datao=0, payload_size=4096 00:38:35.280 [2024-05-15 01:04:38.434405] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434412] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434417] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434425] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.280 [2024-05-15 01:04:38.434431] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.280 [2024-05-15 01:04:38.434435] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434439] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594da0) on tqpair=0x1548580 00:38:35.280 [2024-05-15 01:04:38.434457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.280 [2024-05-15 01:04:38.434464] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.280 [2024-05-15 01:04:38.434467] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434471] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594c40) on tqpair=0x1548580 00:38:35.280 [2024-05-15 01:04:38.434483] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.280 [2024-05-15 01:04:38.434489] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.280 [2024-05-15 01:04:38.434493] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434497] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594f00) on tqpair=0x1548580 00:38:35.280 [2024-05-15 01:04:38.434508] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.280 [2024-05-15 01:04:38.434515] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.280 [2024-05-15 01:04:38.434519] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.280 [2024-05-15 01:04:38.434523] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1595060) on tqpair=0x1548580 00:38:35.280 ===================================================== 00:38:35.280 NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:35.280 ===================================================== 00:38:35.280 Controller Capabilities/Features 00:38:35.280 ================================ 00:38:35.280 Vendor ID: 8086 00:38:35.280 Subsystem Vendor ID: 8086 00:38:35.280 Serial Number: SPDK00000000000001 00:38:35.280 Model Number: SPDK bdev Controller 00:38:35.280 Firmware Version: 24.05 00:38:35.280 Recommended Arb Burst: 6 00:38:35.280 IEEE OUI Identifier: e4 d2 5c 00:38:35.280 Multi-path I/O 00:38:35.280 May have multiple subsystem ports: Yes 00:38:35.280 May have multiple controllers: Yes 00:38:35.280 Associated with SR-IOV VF: No 00:38:35.280 Max Data Transfer Size: 131072 00:38:35.280 Max Number of Namespaces: 32 00:38:35.280 Max Number of I/O Queues: 127 00:38:35.280 NVMe Specification Version (VS): 1.3 00:38:35.280 NVMe Specification Version (Identify): 1.3 00:38:35.280 Maximum Queue Entries: 128 00:38:35.280 Contiguous Queues Required: Yes 00:38:35.280 Arbitration Mechanisms Supported 00:38:35.280 Weighted Round Robin: Not Supported 00:38:35.280 Vendor Specific: Not Supported 00:38:35.280 Reset Timeout: 15000 ms 00:38:35.280 Doorbell Stride: 4 bytes 00:38:35.280 NVM Subsystem Reset: Not Supported 00:38:35.280 Command Sets Supported 00:38:35.280 NVM Command Set: Supported 00:38:35.280 Boot Partition: Not Supported 00:38:35.280 Memory Page Size Minimum: 4096 bytes 00:38:35.280 Memory Page Size Maximum: 4096 bytes 00:38:35.280 Persistent Memory Region: Not Supported 00:38:35.280 Optional Asynchronous Events Supported 00:38:35.280 Namespace Attribute Notices: Supported 00:38:35.280 Firmware Activation Notices: Not Supported 00:38:35.280 ANA Change Notices: Not Supported 00:38:35.280 PLE Aggregate Log Change Notices: Not Supported 00:38:35.280 LBA Status Info Alert Notices: Not Supported 00:38:35.280 EGE Aggregate Log Change Notices: Not Supported 00:38:35.280 Normal NVM Subsystem Shutdown event: Not Supported 00:38:35.280 Zone Descriptor Change Notices: Not Supported 00:38:35.280 Discovery Log Change Notices: Not Supported 00:38:35.280 Controller Attributes 00:38:35.280 128-bit Host Identifier: Supported 00:38:35.280 Non-Operational Permissive Mode: Not Supported 00:38:35.280 NVM Sets: Not Supported 00:38:35.280 Read Recovery Levels: Not Supported 00:38:35.280 Endurance Groups: Not Supported 00:38:35.280 Predictable Latency Mode: Not Supported 00:38:35.280 Traffic Based Keep ALive: Not Supported 00:38:35.280 Namespace Granularity: Not Supported 00:38:35.280 SQ Associations: Not Supported 00:38:35.280 UUID List: Not Supported 00:38:35.280 Multi-Domain Subsystem: Not Supported 00:38:35.280 Fixed Capacity Management: Not Supported 00:38:35.280 Variable Capacity Management: Not Supported 00:38:35.280 Delete Endurance Group: Not Supported 00:38:35.280 Delete NVM Set: Not Supported 00:38:35.280 Extended LBA Formats Supported: Not Supported 00:38:35.280 Flexible Data Placement Supported: Not Supported 00:38:35.280 00:38:35.280 Controller Memory Buffer Support 00:38:35.280 ================================ 00:38:35.280 Supported: No 00:38:35.280 00:38:35.280 Persistent Memory Region Support 00:38:35.280 ================================ 00:38:35.280 Supported: No 00:38:35.280 00:38:35.280 Admin Command Set Attributes 00:38:35.280 ============================ 00:38:35.280 Security Send/Receive: Not Supported 00:38:35.280 Format NVM: Not Supported 00:38:35.280 Firmware Activate/Download: Not Supported 00:38:35.280 Namespace Management: Not Supported 00:38:35.280 Device Self-Test: Not 
Supported 00:38:35.280 Directives: Not Supported 00:38:35.280 NVMe-MI: Not Supported 00:38:35.280 Virtualization Management: Not Supported 00:38:35.280 Doorbell Buffer Config: Not Supported 00:38:35.280 Get LBA Status Capability: Not Supported 00:38:35.280 Command & Feature Lockdown Capability: Not Supported 00:38:35.280 Abort Command Limit: 4 00:38:35.280 Async Event Request Limit: 4 00:38:35.280 Number of Firmware Slots: N/A 00:38:35.280 Firmware Slot 1 Read-Only: N/A 00:38:35.280 Firmware Activation Without Reset: N/A 00:38:35.280 Multiple Update Detection Support: N/A 00:38:35.280 Firmware Update Granularity: No Information Provided 00:38:35.280 Per-Namespace SMART Log: No 00:38:35.280 Asymmetric Namespace Access Log Page: Not Supported 00:38:35.280 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:38:35.280 Command Effects Log Page: Supported 00:38:35.280 Get Log Page Extended Data: Supported 00:38:35.280 Telemetry Log Pages: Not Supported 00:38:35.280 Persistent Event Log Pages: Not Supported 00:38:35.280 Supported Log Pages Log Page: May Support 00:38:35.280 Commands Supported & Effects Log Page: Not Supported 00:38:35.280 Feature Identifiers & Effects Log Page:May Support 00:38:35.280 NVMe-MI Commands & Effects Log Page: May Support 00:38:35.280 Data Area 4 for Telemetry Log: Not Supported 00:38:35.280 Error Log Page Entries Supported: 128 00:38:35.280 Keep Alive: Supported 00:38:35.280 Keep Alive Granularity: 10000 ms 00:38:35.280 00:38:35.280 NVM Command Set Attributes 00:38:35.280 ========================== 00:38:35.280 Submission Queue Entry Size 00:38:35.280 Max: 64 00:38:35.280 Min: 64 00:38:35.280 Completion Queue Entry Size 00:38:35.280 Max: 16 00:38:35.280 Min: 16 00:38:35.280 Number of Namespaces: 32 00:38:35.280 Compare Command: Supported 00:38:35.280 Write Uncorrectable Command: Not Supported 00:38:35.280 Dataset Management Command: Supported 00:38:35.280 Write Zeroes Command: Supported 00:38:35.280 Set Features Save Field: Not Supported 00:38:35.280 Reservations: Supported 00:38:35.280 Timestamp: Not Supported 00:38:35.280 Copy: Supported 00:38:35.280 Volatile Write Cache: Present 00:38:35.280 Atomic Write Unit (Normal): 1 00:38:35.280 Atomic Write Unit (PFail): 1 00:38:35.280 Atomic Compare & Write Unit: 1 00:38:35.280 Fused Compare & Write: Supported 00:38:35.280 Scatter-Gather List 00:38:35.280 SGL Command Set: Supported 00:38:35.280 SGL Keyed: Supported 00:38:35.280 SGL Bit Bucket Descriptor: Not Supported 00:38:35.280 SGL Metadata Pointer: Not Supported 00:38:35.280 Oversized SGL: Not Supported 00:38:35.280 SGL Metadata Address: Not Supported 00:38:35.280 SGL Offset: Supported 00:38:35.280 Transport SGL Data Block: Not Supported 00:38:35.280 Replay Protected Memory Block: Not Supported 00:38:35.280 00:38:35.280 Firmware Slot Information 00:38:35.280 ========================= 00:38:35.280 Active slot: 1 00:38:35.280 Slot 1 Firmware Revision: 24.05 00:38:35.280 00:38:35.280 00:38:35.280 Commands Supported and Effects 00:38:35.280 ============================== 00:38:35.280 Admin Commands 00:38:35.280 -------------- 00:38:35.280 Get Log Page (02h): Supported 00:38:35.280 Identify (06h): Supported 00:38:35.280 Abort (08h): Supported 00:38:35.280 Set Features (09h): Supported 00:38:35.280 Get Features (0Ah): Supported 00:38:35.280 Asynchronous Event Request (0Ch): Supported 00:38:35.280 Keep Alive (18h): Supported 00:38:35.280 I/O Commands 00:38:35.280 ------------ 00:38:35.280 Flush (00h): Supported LBA-Change 00:38:35.280 Write (01h): Supported LBA-Change 00:38:35.280 
Read (02h): Supported 00:38:35.280 Compare (05h): Supported 00:38:35.280 Write Zeroes (08h): Supported LBA-Change 00:38:35.280 Dataset Management (09h): Supported LBA-Change 00:38:35.280 Copy (19h): Supported LBA-Change 00:38:35.280 Unknown (79h): Supported LBA-Change 00:38:35.280 Unknown (7Ah): Supported 00:38:35.280 00:38:35.280 Error Log 00:38:35.280 ========= 00:38:35.280 00:38:35.280 Arbitration 00:38:35.280 =========== 00:38:35.280 Arbitration Burst: 1 00:38:35.280 00:38:35.280 Power Management 00:38:35.280 ================ 00:38:35.280 Number of Power States: 1 00:38:35.281 Current Power State: Power State #0 00:38:35.281 Power State #0: 00:38:35.281 Max Power: 0.00 W 00:38:35.281 Non-Operational State: Operational 00:38:35.281 Entry Latency: Not Reported 00:38:35.281 Exit Latency: Not Reported 00:38:35.281 Relative Read Throughput: 0 00:38:35.281 Relative Read Latency: 0 00:38:35.281 Relative Write Throughput: 0 00:38:35.281 Relative Write Latency: 0 00:38:35.281 Idle Power: Not Reported 00:38:35.281 Active Power: Not Reported 00:38:35.281 Non-Operational Permissive Mode: Not Supported 00:38:35.281 00:38:35.281 Health Information 00:38:35.281 ================== 00:38:35.281 Critical Warnings: 00:38:35.281 Available Spare Space: OK 00:38:35.281 Temperature: OK 00:38:35.281 Device Reliability: OK 00:38:35.281 Read Only: No 00:38:35.281 Volatile Memory Backup: OK 00:38:35.281 Current Temperature: 0 Kelvin (-273 Celsius) 00:38:35.281 Temperature Threshold: [2024-05-15 01:04:38.438656] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.438667] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1548580) 00:38:35.281 [2024-05-15 01:04:38.438676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.281 [2024-05-15 01:04:38.438705] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1595060, cid 7, qid 0 00:38:35.281 [2024-05-15 01:04:38.438779] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.281 [2024-05-15 01:04:38.438787] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.281 [2024-05-15 01:04:38.438791] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.438795] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1595060) on tqpair=0x1548580 00:38:35.281 [2024-05-15 01:04:38.438839] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:38:35.281 [2024-05-15 01:04:38.438854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:35.281 [2024-05-15 01:04:38.438861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:35.281 [2024-05-15 01:04:38.438868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:35.281 [2024-05-15 01:04:38.438874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:35.281 [2024-05-15 01:04:38.438884] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.438888] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:38:35.281 [2024-05-15 01:04:38.438892] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.281 [2024-05-15 01:04:38.438900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.281 [2024-05-15 01:04:38.438923] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.281 [2024-05-15 01:04:38.438982] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.281 [2024-05-15 01:04:38.438989] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.281 [2024-05-15 01:04:38.438993] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.438997] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.281 [2024-05-15 01:04:38.439016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439022] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439026] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.281 [2024-05-15 01:04:38.439033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.281 [2024-05-15 01:04:38.439057] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.281 [2024-05-15 01:04:38.439137] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.281 [2024-05-15 01:04:38.439144] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.281 [2024-05-15 01:04:38.439148] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439152] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.281 [2024-05-15 01:04:38.439159] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:38:35.281 [2024-05-15 01:04:38.439164] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:38:35.281 [2024-05-15 01:04:38.439174] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439179] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439182] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.281 [2024-05-15 01:04:38.439190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.281 [2024-05-15 01:04:38.439208] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.281 [2024-05-15 01:04:38.439268] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.281 [2024-05-15 01:04:38.439275] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.281 [2024-05-15 01:04:38.439279] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439283] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.281 [2024-05-15 01:04:38.439296] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:38:35.281 [2024-05-15 01:04:38.439301] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439305] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.281 [2024-05-15 01:04:38.439312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.281 [2024-05-15 01:04:38.439330] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.281 [2024-05-15 01:04:38.439387] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.281 [2024-05-15 01:04:38.439394] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.281 [2024-05-15 01:04:38.439398] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439402] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.281 [2024-05-15 01:04:38.439413] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439418] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439422] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.281 [2024-05-15 01:04:38.439430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.281 [2024-05-15 01:04:38.439448] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.281 [2024-05-15 01:04:38.439501] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.281 [2024-05-15 01:04:38.439508] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.281 [2024-05-15 01:04:38.439512] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439516] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.281 [2024-05-15 01:04:38.439528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439533] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439537] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.281 [2024-05-15 01:04:38.439545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.281 [2024-05-15 01:04:38.439563] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.281 [2024-05-15 01:04:38.439632] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.281 [2024-05-15 01:04:38.439641] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.281 [2024-05-15 01:04:38.439645] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439650] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.281 [2024-05-15 01:04:38.439662] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439668] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439672] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.281 [2024-05-15 01:04:38.439679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.281 [2024-05-15 01:04:38.439699] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.281 [2024-05-15 01:04:38.439758] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.281 [2024-05-15 01:04:38.439765] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.281 [2024-05-15 01:04:38.439769] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439773] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.281 [2024-05-15 01:04:38.439784] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439789] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439793] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.281 [2024-05-15 01:04:38.439801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.281 [2024-05-15 01:04:38.439819] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.281 [2024-05-15 01:04:38.439875] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.281 [2024-05-15 01:04:38.439882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.281 [2024-05-15 01:04:38.439886] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439890] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.281 [2024-05-15 01:04:38.439902] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439907] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.439911] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.281 [2024-05-15 01:04:38.439918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.281 [2024-05-15 01:04:38.439936] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.281 [2024-05-15 01:04:38.439991] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.281 [2024-05-15 01:04:38.439998] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.281 [2024-05-15 01:04:38.440002] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.440006] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.281 [2024-05-15 01:04:38.440018] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.281 [2024-05-15 01:04:38.440023] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440027] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.440034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.440052] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.440104] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.440111] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.440115] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440119] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.440131] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440136] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440140] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.440148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.440165] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.440224] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.440241] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.440246] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440251] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.440263] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440269] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440273] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.440280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.440300] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.440358] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.440365] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.440369] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440374] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.440385] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440390] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440394] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.440402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.440420] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 
0 00:38:35.282 [2024-05-15 01:04:38.440473] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.440480] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.440484] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440488] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.440499] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440504] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440508] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.440516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.440534] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.440590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.440611] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.440616] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440620] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.440633] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440638] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440642] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.440650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.440670] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.440725] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.440733] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.440736] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440741] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.440752] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440761] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.440769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.440787] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.440842] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.440854] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.440858] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440863] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.440875] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440880] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440884] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.440892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.440911] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.440970] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.440977] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.440981] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.440985] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.440997] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441002] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441006] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.441014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.441032] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.441090] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.441097] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.441102] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441106] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.441117] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441122] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441126] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.441134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.441152] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.441213] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.441220] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.441224] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441228] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.441239] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441244] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441248] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.441256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.441274] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.441333] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.441340] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.441344] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441348] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.441360] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441365] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441369] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.441377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.441395] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.441449] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.441457] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.441461] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441465] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.441476] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441481] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441485] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.441493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.441511] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.441563] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.441570] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.441574] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441578] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.441590] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:38:35.282 [2024-05-15 01:04:38.441606] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.441619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.441639] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.441702] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.441709] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.441713] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441718] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.282 [2024-05-15 01:04:38.441729] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441734] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441738] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.282 [2024-05-15 01:04:38.441746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.282 [2024-05-15 01:04:38.441764] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.282 [2024-05-15 01:04:38.441818] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.282 [2024-05-15 01:04:38.441825] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.282 [2024-05-15 01:04:38.441829] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.282 [2024-05-15 01:04:38.441833] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.283 [2024-05-15 01:04:38.441845] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.441850] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.441854] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.283 [2024-05-15 01:04:38.441861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.283 [2024-05-15 01:04:38.441879] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.283 [2024-05-15 01:04:38.441934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.283 [2024-05-15 01:04:38.441941] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.283 [2024-05-15 01:04:38.441945] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.441949] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.283 [2024-05-15 01:04:38.441960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.441965] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.441969] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.283 [2024-05-15 01:04:38.441977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.283 [2024-05-15 01:04:38.441995] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.283 [2024-05-15 01:04:38.442047] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.283 [2024-05-15 01:04:38.442054] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.283 [2024-05-15 01:04:38.442058] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442062] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.283 [2024-05-15 01:04:38.442074] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442079] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.283 [2024-05-15 01:04:38.442090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.283 [2024-05-15 01:04:38.442108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.283 [2024-05-15 01:04:38.442161] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.283 [2024-05-15 01:04:38.442168] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.283 [2024-05-15 01:04:38.442172] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442176] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.283 [2024-05-15 01:04:38.442188] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442193] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442197] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.283 [2024-05-15 01:04:38.442205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.283 [2024-05-15 01:04:38.442223] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.283 [2024-05-15 01:04:38.442279] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.283 [2024-05-15 01:04:38.442290] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.283 [2024-05-15 01:04:38.442295] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442299] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.283 [2024-05-15 01:04:38.442312] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442317] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442321] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.283 [2024-05-15 01:04:38.442328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.283 [2024-05-15 01:04:38.442348] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.283 [2024-05-15 01:04:38.442403] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.283 [2024-05-15 01:04:38.442410] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.283 [2024-05-15 01:04:38.442414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442419] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.283 [2024-05-15 01:04:38.442430] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442435] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442439] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.283 [2024-05-15 01:04:38.442447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.283 [2024-05-15 01:04:38.442465] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.283 [2024-05-15 01:04:38.442517] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.283 [2024-05-15 01:04:38.442525] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.283 [2024-05-15 01:04:38.442529] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442533] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.283 [2024-05-15 01:04:38.442545] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442550] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.442554] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.283 [2024-05-15 01:04:38.442561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.283 [2024-05-15 01:04:38.442579] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 0 00:38:35.283 [2024-05-15 01:04:38.446614] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.283 [2024-05-15 01:04:38.446633] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.283 [2024-05-15 01:04:38.446639] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.446643] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.283 [2024-05-15 01:04:38.446658] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.446664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.446668] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548580) 00:38:35.283 [2024-05-15 01:04:38.446676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:35.283 [2024-05-15 01:04:38.446701] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594ae0, cid 3, qid 
0 00:38:35.283 [2024-05-15 01:04:38.446759] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:35.283 [2024-05-15 01:04:38.446766] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:35.283 [2024-05-15 01:04:38.446770] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:35.283 [2024-05-15 01:04:38.446775] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1594ae0) on tqpair=0x1548580 00:38:35.283 [2024-05-15 01:04:38.446785] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:38:35.283 0 Kelvin (-273 Celsius) 00:38:35.283 Available Spare: 0% 00:38:35.283 Available Spare Threshold: 0% 00:38:35.283 Life Percentage Used: 0% 00:38:35.283 Data Units Read: 0 00:38:35.283 Data Units Written: 0 00:38:35.283 Host Read Commands: 0 00:38:35.283 Host Write Commands: 0 00:38:35.283 Controller Busy Time: 0 minutes 00:38:35.283 Power Cycles: 0 00:38:35.283 Power On Hours: 0 hours 00:38:35.283 Unsafe Shutdowns: 0 00:38:35.283 Unrecoverable Media Errors: 0 00:38:35.283 Lifetime Error Log Entries: 0 00:38:35.283 Warning Temperature Time: 0 minutes 00:38:35.283 Critical Temperature Time: 0 minutes 00:38:35.283 00:38:35.283 Number of Queues 00:38:35.283 ================ 00:38:35.283 Number of I/O Submission Queues: 127 00:38:35.283 Number of I/O Completion Queues: 127 00:38:35.283 00:38:35.283 Active Namespaces 00:38:35.283 ================= 00:38:35.283 Namespace ID:1 00:38:35.283 Error Recovery Timeout: Unlimited 00:38:35.283 Command Set Identifier: NVM (00h) 00:38:35.283 Deallocate: Supported 00:38:35.283 Deallocated/Unwritten Error: Not Supported 00:38:35.283 Deallocated Read Value: Unknown 00:38:35.283 Deallocate in Write Zeroes: Not Supported 00:38:35.283 Deallocated Guard Field: 0xFFFF 00:38:35.283 Flush: Supported 00:38:35.283 Reservation: Supported 00:38:35.283 Namespace Sharing Capabilities: Multiple Controllers 00:38:35.283 Size (in LBAs): 131072 (0GiB) 00:38:35.283 Capacity (in LBAs): 131072 (0GiB) 00:38:35.283 Utilization (in LBAs): 131072 (0GiB) 00:38:35.283 NGUID: ABCDEF0123456789ABCDEF0123456789 00:38:35.283 EUI64: ABCDEF0123456789 00:38:35.283 UUID: 47594db8-ba26-46cb-be9b-3f9ec9fe6fba 00:38:35.283 Thin Provisioning: Not Supported 00:38:35.283 Per-NS Atomic Units: Yes 00:38:35.283 Atomic Boundary Size (Normal): 0 00:38:35.283 Atomic Boundary Size (PFail): 0 00:38:35.283 Atomic Boundary Offset: 0 00:38:35.283 Maximum Single Source Range Length: 65535 00:38:35.283 Maximum Copy Length: 65535 00:38:35.283 Maximum Source Range Count: 1 00:38:35.283 NGUID/EUI64 Never Reused: No 00:38:35.283 Namespace Write Protected: No 00:38:35.283 Number of LBA Formats: 1 00:38:35.283 Current LBA Format: LBA Format #00 00:38:35.283 LBA Format #00: Data Size: 512 Metadata Size: 0 00:38:35.283 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 
00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:35.283 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:35.283 rmmod nvme_tcp 00:38:35.283 rmmod nvme_fabrics 00:38:35.542 rmmod nvme_keyring 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 104406 ']' 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 104406 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 104406 ']' 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 104406 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 104406 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:38:35.542 killing process with pid 104406 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 104406' 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 104406 00:38:35.542 [2024-05-15 01:04:38.618640] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:35.542 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 104406 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:38:35.801 00:38:35.801 real 0m2.651s 00:38:35.801 user 0m7.234s 00:38:35.801 sys 0m0.698s 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:35.801 01:04:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
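The controller, health and namespace data dumped above was pulled over the freshly created NVMe/TCP connection by identify.sh. A rough way to reproduce the same queries by hand with stock nvme-cli and the kernel nvme-tcp initiator (a hedged sketch; the test itself drives the SPDK userspace host stack, and the device names below depend on enumeration):

  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.2 -s 4420                                 # list exported subsystems
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme id-ctrl /dev/nvme0                                                  # controller identify data
  nvme id-ns /dev/nvme0n1                                                  # namespace identify data (size, LBA formats, NGUID/EUI64)
  nvme smart-log /dev/nvme0                                                # spare, temperature, data units read/written
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1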
00:38:35.801 ************************************ 00:38:35.801 END TEST nvmf_identify 00:38:35.801 ************************************ 00:38:35.801 01:04:38 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:38:35.801 01:04:38 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:38:35.801 01:04:38 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:35.801 01:04:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:35.801 ************************************ 00:38:35.801 START TEST nvmf_perf 00:38:35.801 ************************************ 00:38:35.801 01:04:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:38:35.801 * Looking for test storage... 00:38:35.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:35.801 01:04:39 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:38:35.802 
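With NET_TYPE=virt, the nvmftestinit call above is what builds the virtual test network the rest of perf.sh talks to; the ip commands traced below tear down any leftovers and then recreate it. Collected into one standalone sketch (names and addresses are taken from the trace; running it needs root and iproute2):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br                # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                  # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                           # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.2                                                       # initiator -> target reachability check

(The trace additionally creates nvmf_tgt_if2 with 10.0.0.3 and opens TCP port 4420 in iptables; the sketch keeps only the minimum needed for one listener.)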
01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:35.802 Cannot find device "nvmf_tgt_br" 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:38:35.802 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:36.060 Cannot find device "nvmf_tgt_br2" 00:38:36.060 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:38:36.060 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:38:36.060 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:36.060 Cannot find device "nvmf_tgt_br" 00:38:36.060 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:38:36.060 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:38:36.060 Cannot find device "nvmf_tgt_br2" 00:38:36.060 01:04:39 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@159 -- # true 00:38:36.060 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:36.060 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:38:36.060 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:36.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:36.060 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:36.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:36.061 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:36.319 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:38:36.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:38:36.319 00:38:36.319 --- 10.0.0.2 ping statistics --- 00:38:36.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.319 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:36.319 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:36.319 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:38:36.319 00:38:36.319 --- 10.0.0.3 ping statistics --- 00:38:36.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.319 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:36.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:36.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:38:36.319 00:38:36.319 --- 10.0.0.1 ping statistics --- 00:38:36.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.319 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:36.319 01:04:39 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=104636 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 104636 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 104636 ']' 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:38:36.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:38:36.320 01:04:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:36.320 [2024-05-15 01:04:39.462690] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
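nvmfappstart runs the target inside the namespace and waits for its RPC socket before the test goes on to configure it. The rpc.py calls traced in the surrounding output, gathered into one sequence (paths as in this workspace; the Malloc0 size and block size come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE at the top of perf.sh):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # ... wait for /var/tmp/spdk.sock to appear, then:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                                  # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1                    # local NVMe drive attached via gen_nvme.sh
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420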
00:38:36.320 [2024-05-15 01:04:39.462793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:36.320 [2024-05-15 01:04:39.603216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:36.578 [2024-05-15 01:04:39.690284] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:36.578 [2024-05-15 01:04:39.690350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:36.578 [2024-05-15 01:04:39.690362] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:36.578 [2024-05-15 01:04:39.690370] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:36.578 [2024-05-15 01:04:39.690378] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:36.578 [2024-05-15 01:04:39.691041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:36.578 [2024-05-15 01:04:39.691135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:36.578 [2024-05-15 01:04:39.691195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:36.578 [2024-05-15 01:04:39.691199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.512 01:04:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:38:37.512 01:04:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:38:37.512 01:04:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:37.512 01:04:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:37.512 01:04:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:37.512 01:04:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:37.512 01:04:40 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:37.512 01:04:40 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:38:37.771 01:04:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:38:37.771 01:04:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:38:38.029 01:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:38:38.029 01:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:38.287 01:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:38:38.287 01:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:38:38.287 01:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:38:38.287 01:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:38:38.287 01:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:38:38.545 [2024-05-15 01:04:41.672904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:38.545 01:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:38:38.804 01:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:38:38.804 01:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:39.062 01:04:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:38:39.062 01:04:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:39.320 01:04:42 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:39.577 [2024-05-15 01:04:42.621891] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:39.577 [2024-05-15 01:04:42.622403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:39.577 01:04:42 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:39.577 01:04:42 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:38:39.577 01:04:42 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:38:39.577 01:04:42 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:38:39.577 01:04:42 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:38:40.957 Initializing NVMe Controllers 00:38:40.957 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:38:40.957 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:38:40.957 Initialization complete. Launching workers. 00:38:40.957 ======================================================== 00:38:40.957 Latency(us) 00:38:40.957 Device Information : IOPS MiB/s Average min max 00:38:40.957 PCIE (0000:00:10.0) NSID 1 from core 0: 25187.25 98.39 1269.81 339.05 6612.30 00:38:40.957 ======================================================== 00:38:40.957 Total : 25187.25 98.39 1269.81 339.05 6612.30 00:38:40.957 00:38:40.957 01:04:43 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:42.334 Initializing NVMe Controllers 00:38:42.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:42.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:42.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:42.334 Initialization complete. Launching workers. 
00:38:42.334 ======================================================== 00:38:42.334 Latency(us) 00:38:42.334 Device Information : IOPS MiB/s Average min max 00:38:42.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3585.95 14.01 278.54 108.58 4240.26 00:38:42.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8120.74 6026.53 12038.04 00:38:42.334 ======================================================== 00:38:42.334 Total : 3709.94 14.49 540.65 108.58 12038.04 00:38:42.334 00:38:42.334 01:04:45 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:43.713 Initializing NVMe Controllers 00:38:43.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:43.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:43.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:43.713 Initialization complete. Launching workers. 00:38:43.713 ======================================================== 00:38:43.713 Latency(us) 00:38:43.713 Device Information : IOPS MiB/s Average min max 00:38:43.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8772.99 34.27 3648.14 747.44 7585.00 00:38:43.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2700.38 10.55 11958.79 7340.92 20949.07 00:38:43.713 ======================================================== 00:38:43.713 Total : 11473.37 44.82 5604.14 747.44 20949.07 00:38:43.713 00:38:43.713 01:04:46 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:38:43.713 01:04:46 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:46.246 Initializing NVMe Controllers 00:38:46.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:46.246 Controller IO queue size 128, less than required. 00:38:46.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:46.246 Controller IO queue size 128, less than required. 00:38:46.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:46.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:46.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:46.246 Initialization complete. Launching workers. 
00:38:46.246 ======================================================== 00:38:46.246 Latency(us) 00:38:46.246 Device Information : IOPS MiB/s Average min max 00:38:46.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1531.85 382.96 85276.13 55114.20 156672.41 00:38:46.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 547.09 136.77 242288.66 105828.26 349303.07 00:38:46.246 ======================================================== 00:38:46.246 Total : 2078.95 519.74 126595.21 55114.20 349303.07 00:38:46.246 00:38:46.246 01:04:49 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:38:46.246 Initializing NVMe Controllers 00:38:46.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:46.246 Controller IO queue size 128, less than required. 00:38:46.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:46.246 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:38:46.246 Controller IO queue size 128, less than required. 00:38:46.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:46.246 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:38:46.246 WARNING: Some requested NVMe devices were skipped 00:38:46.246 No valid NVMe controllers or AIO or URING devices found 00:38:46.246 01:04:49 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:38:48.778 Initializing NVMe Controllers 00:38:48.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:48.778 Controller IO queue size 128, less than required. 00:38:48.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:48.778 Controller IO queue size 128, less than required. 00:38:48.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:48.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:48.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:48.778 Initialization complete. Launching workers. 
00:38:48.778 00:38:48.778 ==================== 00:38:48.778 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:38:48.778 TCP transport: 00:38:48.778 polls: 11968 00:38:48.778 idle_polls: 8307 00:38:48.778 sock_completions: 3661 00:38:48.778 nvme_completions: 4615 00:38:48.778 submitted_requests: 6944 00:38:48.778 queued_requests: 1 00:38:48.778 00:38:48.778 ==================== 00:38:48.778 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:38:48.778 TCP transport: 00:38:48.778 polls: 8037 00:38:48.778 idle_polls: 4636 00:38:48.778 sock_completions: 3401 00:38:48.778 nvme_completions: 6363 00:38:48.778 submitted_requests: 9464 00:38:48.778 queued_requests: 1 00:38:48.778 ======================================================== 00:38:48.778 Latency(us) 00:38:48.778 Device Information : IOPS MiB/s Average min max 00:38:48.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1153.45 288.36 112738.56 69676.90 184473.86 00:38:48.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1590.43 397.61 81930.78 41890.50 140132.05 00:38:48.778 ======================================================== 00:38:48.778 Total : 2743.88 685.97 94881.50 41890.50 184473.86 00:38:48.778 00:38:48.778 01:04:51 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:38:48.778 01:04:52 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:49.345 01:04:52 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:38:49.345 01:04:52 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:38:49.345 01:04:52 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:38:49.345 01:04:52 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=c81a6fd3-0f1b-48ae-acb6-f4483386ac75 00:38:49.345 01:04:52 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c81a6fd3-0f1b-48ae-acb6-f4483386ac75 00:38:49.345 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_uuid=c81a6fd3-0f1b-48ae-acb6-f4483386ac75 00:38:49.345 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_info 00:38:49.345 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local fc 00:38:49.345 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local cs 00:38:49.345 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:49.604 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:38:49.604 { 00:38:49.604 "base_bdev": "Nvme0n1", 00:38:49.604 "block_size": 4096, 00:38:49.604 "cluster_size": 4194304, 00:38:49.604 "free_clusters": 1278, 00:38:49.604 "name": "lvs_0", 00:38:49.604 "total_data_clusters": 1278, 00:38:49.604 "uuid": "c81a6fd3-0f1b-48ae-acb6-f4483386ac75" 00:38:49.604 } 00:38:49.604 ]' 00:38:49.604 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="c81a6fd3-0f1b-48ae-acb6-f4483386ac75") .free_clusters' 00:38:49.604 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # fc=1278 00:38:49.604 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="c81a6fd3-0f1b-48ae-acb6-f4483386ac75") .cluster_size' 00:38:49.863 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # cs=4194304 
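The free_mb figure printed just below follows directly from the two jq results above: get_lvs_free_mb multiplies free_clusters by cluster_size and converts to MiB, i.e. 1278 * 4194304 / 1048576 = 5112 MiB for lvs_0 (and, further down, 1276 * 4 MiB = 5104 MiB for the nested lvs_n_0). The equivalent shell arithmetic, as a sketch (the helper's exact wording lives in autotest_common.sh):

  fc=1278; cs=4194304                  # free_clusters and cluster_size from bdev_lvol_get_lvstores
  echo $(( fc * cs / 1024 / 1024 ))    # prints 5112, below the 20480 MiB cap perf.sh checks before creating lbd_0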
00:38:49.863 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # free_mb=5112 00:38:49.863 5112 00:38:49.863 01:04:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1371 -- # echo 5112 00:38:49.863 01:04:52 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:38:49.863 01:04:52 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c81a6fd3-0f1b-48ae-acb6-f4483386ac75 lbd_0 5112 00:38:49.863 01:04:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=4816647d-d472-4ffc-ba2d-1bee85a5c001 00:38:49.863 01:04:53 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4816647d-d472-4ffc-ba2d-1bee85a5c001 lvs_n_0 00:38:50.430 01:04:53 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=78f74c2c-4194-406c-b2c8-68a0fd8f0422 00:38:50.430 01:04:53 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 78f74c2c-4194-406c-b2c8-68a0fd8f0422 00:38:50.430 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_uuid=78f74c2c-4194-406c-b2c8-68a0fd8f0422 00:38:50.430 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_info 00:38:50.430 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local fc 00:38:50.430 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local cs 00:38:50.430 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:50.687 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:38:50.687 { 00:38:50.687 "base_bdev": "Nvme0n1", 00:38:50.687 "block_size": 4096, 00:38:50.687 "cluster_size": 4194304, 00:38:50.687 "free_clusters": 0, 00:38:50.687 "name": "lvs_0", 00:38:50.687 "total_data_clusters": 1278, 00:38:50.687 "uuid": "c81a6fd3-0f1b-48ae-acb6-f4483386ac75" 00:38:50.687 }, 00:38:50.687 { 00:38:50.687 "base_bdev": "4816647d-d472-4ffc-ba2d-1bee85a5c001", 00:38:50.687 "block_size": 4096, 00:38:50.687 "cluster_size": 4194304, 00:38:50.687 "free_clusters": 1276, 00:38:50.687 "name": "lvs_n_0", 00:38:50.687 "total_data_clusters": 1276, 00:38:50.687 "uuid": "78f74c2c-4194-406c-b2c8-68a0fd8f0422" 00:38:50.687 } 00:38:50.687 ]' 00:38:50.687 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="78f74c2c-4194-406c-b2c8-68a0fd8f0422") .free_clusters' 00:38:50.687 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # fc=1276 00:38:50.687 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="78f74c2c-4194-406c-b2c8-68a0fd8f0422") .cluster_size' 00:38:50.687 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # cs=4194304 00:38:50.687 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # free_mb=5104 00:38:50.687 5104 00:38:50.687 01:04:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1371 -- # echo 5104 00:38:50.687 01:04:53 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:38:50.687 01:04:53 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 78f74c2c-4194-406c-b2c8-68a0fd8f0422 lbd_nest_0 5104 00:38:50.945 01:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=9833df79-2153-410b-99ee-68c892ff4737 00:38:50.945 01:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:38:51.204 01:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:38:51.204 01:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9833df79-2153-410b-99ee-68c892ff4737 00:38:51.462 01:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:51.720 01:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:38:51.720 01:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:38:51.720 01:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:38:51.720 01:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:38:51.720 01:04:54 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:51.978 Initializing NVMe Controllers 00:38:51.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:51.978 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:38:51.978 WARNING: Some requested NVMe devices were skipped 00:38:51.978 No valid NVMe controllers or AIO or URING devices found 00:38:51.978 01:04:55 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:38:51.978 01:04:55 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:04.224 Initializing NVMe Controllers 00:39:04.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:04.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:04.224 Initialization complete. Launching workers. 
00:39:04.224 ======================================================== 00:39:04.224 Latency(us) 00:39:04.225 Device Information : IOPS MiB/s Average min max 00:39:04.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 981.81 122.73 1018.11 341.30 7597.16 00:39:04.225 ======================================================== 00:39:04.225 Total : 981.81 122.73 1018.11 341.30 7597.16 00:39:04.225 00:39:04.225 01:05:05 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:39:04.225 01:05:05 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:39:04.225 01:05:05 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:04.225 Initializing NVMe Controllers 00:39:04.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:04.225 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:39:04.225 WARNING: Some requested NVMe devices were skipped 00:39:04.225 No valid NVMe controllers or AIO or URING devices found 00:39:04.225 01:05:05 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:39:04.225 01:05:05 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:14.195 Initializing NVMe Controllers 00:39:14.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:14.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:14.195 Initialization complete. Launching workers. 
00:39:14.195 ======================================================== 00:39:14.195 Latency(us) 00:39:14.195 Device Information : IOPS MiB/s Average min max 00:39:14.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1061.90 132.74 30177.04 5280.98 448332.68 00:39:14.195 ======================================================== 00:39:14.195 Total : 1061.90 132.74 30177.04 5280.98 448332.68 00:39:14.195 00:39:14.195 01:05:16 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:39:14.195 01:05:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:39:14.195 01:05:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:14.195 Initializing NVMe Controllers 00:39:14.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:14.195 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:39:14.195 WARNING: Some requested NVMe devices were skipped 00:39:14.195 No valid NVMe controllers or AIO or URING devices found 00:39:14.195 01:05:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:39:14.195 01:05:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:24.166 Initializing NVMe Controllers 00:39:24.166 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:24.166 Controller IO queue size 128, less than required. 00:39:24.166 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:24.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:24.166 Initialization complete. Launching workers. 
00:39:24.166 ======================================================== 00:39:24.166 Latency(us) 00:39:24.166 Device Information : IOPS MiB/s Average min max 00:39:24.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3893.95 486.74 32913.24 11234.20 82531.24 00:39:24.166 ======================================================== 00:39:24.166 Total : 3893.95 486.74 32913.24 11234.20 82531.24 00:39:24.166 00:39:24.166 01:05:26 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:24.166 01:05:26 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9833df79-2153-410b-99ee-68c892ff4737 00:39:24.166 01:05:27 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:39:24.425 01:05:27 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4816647d-d472-4ffc-ba2d-1bee85a5c001 00:39:24.684 01:05:27 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:24.943 rmmod nvme_tcp 00:39:24.943 rmmod nvme_fabrics 00:39:24.943 rmmod nvme_keyring 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 104636 ']' 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 104636 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 104636 ']' 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 104636 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 104636 00:39:24.943 killing process with pid 104636 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 104636' 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 104636 00:39:24.943 [2024-05-15 01:05:28.175386] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 
hit 1 times 00:39:24.943 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 104636 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:39:25.509 ************************************ 00:39:25.509 END TEST nvmf_perf 00:39:25.509 ************************************ 00:39:25.509 00:39:25.509 real 0m49.766s 00:39:25.509 user 3m7.879s 00:39:25.509 sys 0m10.996s 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:39:25.509 01:05:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:39:25.509 01:05:28 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:39:25.509 01:05:28 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:39:25.509 01:05:28 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:39:25.509 01:05:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:25.509 ************************************ 00:39:25.509 START TEST nvmf_fio_host 00:39:25.509 ************************************ 00:39:25.509 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:39:25.768 * Looking for test storage... 
00:39:25.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:25.768 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:39:25.769 01:05:28 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:39:25.769 Cannot find device "nvmf_tgt_br" 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:39:25.769 Cannot find device "nvmf_tgt_br2" 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:39:25.769 Cannot find device "nvmf_tgt_br" 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:39:25.769 Cannot find device "nvmf_tgt_br2" 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:25.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:25.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:39:25.769 01:05:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:25.769 01:05:29 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:25.769 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:25.769 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:25.769 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:25.769 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:39:26.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:26.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:39:26.028 00:39:26.028 --- 10.0.0.2 ping statistics --- 00:39:26.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:26.028 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:39:26.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:26.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:39:26.028 00:39:26.028 --- 10.0.0.3 ping statistics --- 00:39:26.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:26.028 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:26.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:26.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:39:26.028 00:39:26.028 --- 10.0.0.1 ping statistics --- 00:39:26.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:26.028 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:39:26.028 01:05:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=105589 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 105589 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 105589 ']' 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:26.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:26.029 01:05:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.029 [2024-05-15 01:05:29.256395] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:39:26.029 [2024-05-15 01:05:29.256499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:26.287 [2024-05-15 01:05:29.393278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:26.287 [2024-05-15 01:05:29.481422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:26.287 [2024-05-15 01:05:29.481493] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
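The topology that nvmf_veth_init just traced (namespace, veth pairs, bridge, addresses, the port-4420 iptables rule and the verification pings), together with the namespaced target launch from host/fio.sh@21, condenses to a short shell sequence. This is a hedged sketch using the names from the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and omitted here, and waitforlisten is only approximated with an RPC poll:

    # Namespace and veth topology (mirrors nvmf/common.sh@166-207 above).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp

    # Launch the target inside the namespace and wait for its RPC socket
    # (host/fio.sh@21-26; the poll below stands in for waitforlisten).
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods &> /dev/null; do sleep 0.5; done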
00:39:26.287 [2024-05-15 01:05:29.481521] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:26.287 [2024-05-15 01:05:29.481529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:26.287 [2024-05-15 01:05:29.481536] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:26.287 [2024-05-15 01:05:29.481695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:26.287 [2024-05-15 01:05:29.481844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:26.287 [2024-05-15 01:05:29.482528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.287 [2024-05-15 01:05:29.482498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:27.222 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:27.222 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:39:27.222 01:05:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:27.222 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.222 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.222 [2024-05-15 01:05:30.218407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:27.222 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.222 01:05:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:39:27.222 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:39:27.222 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.222 01:05:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.223 Malloc1 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.223 [2024-05-15 01:05:30.327410] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:39:27.223 [2024-05-15 01:05:30.327700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:39:27.223 01:05:30 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:27.223 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:39:27.223 fio-3.35 00:39:27.223 Starting 1 thread 00:39:29.754 00:39:29.754 test: (groupid=0, jobs=1): err= 0: pid=105662: Wed May 15 01:05:32 2024 00:39:29.754 read: IOPS=8841, BW=34.5MiB/s (36.2MB/s)(69.3MiB/2007msec) 00:39:29.754 slat (usec): min=2, max=242, avg= 2.66, stdev= 2.25 00:39:29.754 clat (usec): min=2382, max=16643, avg=7539.18, stdev=918.25 00:39:29.754 lat (usec): min=2416, max=16645, avg=7541.84, stdev=918.18 00:39:29.754 clat percentiles (usec): 00:39:29.754 | 1.00th=[ 6325], 5.00th=[ 6652], 10.00th=[ 6783], 20.00th=[ 6980], 00:39:29.754 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:39:29.754 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8291], 95.00th=[ 9372], 00:39:29.754 | 99.00th=[10683], 99.50th=[12256], 99.90th=[15533], 99.95th=[16319], 00:39:29.754 | 99.99th=[16581] 00:39:29.754 bw ( KiB/s): min=31888, max=36952, per=99.99%, avg=35364.00, stdev=2338.14, samples=4 00:39:29.754 iops : min= 7972, max= 9238, avg=8841.00, stdev=584.54, samples=4 00:39:29.754 write: IOPS=8858, BW=34.6MiB/s (36.3MB/s)(69.4MiB/2007msec); 0 zone resets 00:39:29.754 slat (usec): min=2, max=149, avg= 2.79, stdev= 1.38 00:39:29.754 clat (usec): min=1417, max=16213, avg=6854.23, stdev=844.22 00:39:29.754 lat (usec): min=1426, max=16216, avg=6857.02, stdev=844.23 00:39:29.754 clat percentiles (usec): 00:39:29.754 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6390], 00:39:29.754 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6849], 00:39:29.754 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7439], 95.00th=[ 8586], 00:39:29.754 | 99.00th=[ 9634], 99.50th=[10945], 99.90th=[14484], 99.95th=[14877], 00:39:29.754 | 99.99th=[16188] 00:39:29.754 bw ( KiB/s): min=32656, max=36936, per=99.99%, avg=35430.00, stdev=1903.28, samples=4 00:39:29.754 iops : min= 8164, max= 9234, avg=8857.50, stdev=475.82, samples=4 00:39:29.754 lat (msec) : 2=0.03%, 4=0.13%, 10=98.19%, 20=1.64% 00:39:29.754 cpu : usr=67.35%, sys=23.58%, ctx=6, majf=0, minf=5 00:39:29.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:39:29.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:29.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:29.754 issued rwts: total=17745,17779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:29.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:29.754 00:39:29.754 Run status group 0 (all jobs): 00:39:29.754 READ: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.7MB), run=2007-2007msec 00:39:29.754 WRITE: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.4MiB (72.8MB), run=2007-2007msec 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:39:29.754 01:05:32 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:39:29.754 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:39:29.755 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:39:29.755 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:39:29.755 01:05:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:39:29.755 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:39:29.755 fio-3.35 00:39:29.755 Starting 1 thread 00:39:32.283 00:39:32.283 test: (groupid=0, jobs=1): err= 0: pid=105711: Wed May 15 01:05:35 2024 00:39:32.283 read: IOPS=8155, BW=127MiB/s (134MB/s)(255MiB/2005msec) 00:39:32.283 slat (usec): min=3, max=115, avg= 3.85, stdev= 1.79 00:39:32.283 clat (usec): min=2743, max=17815, avg=9322.83, stdev=2305.27 00:39:32.283 lat (usec): min=2746, max=17818, avg=9326.68, stdev=2305.26 00:39:32.283 clat percentiles (usec): 00:39:32.283 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7177], 00:39:32.283 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10028], 00:39:32.283 | 70.00th=[10683], 80.00th=[11338], 90.00th=[11863], 95.00th=[13042], 00:39:32.283 | 99.00th=[15401], 99.50th=[16319], 99.90th=[17433], 99.95th=[17433], 00:39:32.283 | 99.99th=[17695] 00:39:32.283 bw ( KiB/s): min=57728, max=74816, per=50.42%, avg=65784.00, stdev=8254.53, samples=4 00:39:32.283 iops : min= 3608, max= 4676, avg=4111.50, stdev=515.91, samples=4 00:39:32.283 write: IOPS=4775, BW=74.6MiB/s (78.2MB/s)(135MiB/1810msec); 0 zone 
resets 00:39:32.283 slat (usec): min=36, max=353, avg=38.05, stdev= 6.53 00:39:32.283 clat (usec): min=4125, max=18787, avg=11221.05, stdev=1965.29 00:39:32.283 lat (usec): min=4162, max=18824, avg=11259.10, stdev=1964.94 00:39:32.283 clat percentiles (usec): 00:39:32.283 | 1.00th=[ 7373], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9503], 00:39:32.283 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11469], 00:39:32.283 | 70.00th=[11994], 80.00th=[12780], 90.00th=[13960], 95.00th=[14877], 00:39:32.283 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:39:32.283 | 99.99th=[18744] 00:39:32.283 bw ( KiB/s): min=59328, max=78176, per=89.76%, avg=68576.00, stdev=9370.08, samples=4 00:39:32.283 iops : min= 3708, max= 4886, avg=4286.00, stdev=585.63, samples=4 00:39:32.283 lat (msec) : 4=0.18%, 10=49.22%, 20=50.60% 00:39:32.283 cpu : usr=71.61%, sys=18.66%, ctx=28, majf=0, minf=1 00:39:32.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:39:32.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:32.283 issued rwts: total=16351,8643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:32.283 00:39:32.283 Run status group 0 (all jobs): 00:39:32.283 READ: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=255MiB (268MB), run=2005-2005msec 00:39:32.283 WRITE: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=135MiB (142MB), run=1810-1810msec 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=() 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # local bdfs 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.283 Nvme0n1 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 
0 ]] 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=01526402-05d6-42b2-9a92-7416202f7236 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb 01526402-05d6-42b2-9a92-7416202f7236 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_uuid=01526402-05d6-42b2-9a92-7416202f7236 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_info 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local fc 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local cs 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # rpc_cmd bdev_lvol_get_lvstores 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.283 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:39:32.283 { 00:39:32.283 "base_bdev": "Nvme0n1", 00:39:32.283 "block_size": 4096, 00:39:32.283 "cluster_size": 1073741824, 00:39:32.283 "free_clusters": 4, 00:39:32.284 "name": "lvs_0", 00:39:32.284 "total_data_clusters": 4, 00:39:32.284 "uuid": "01526402-05d6-42b2-9a92-7416202f7236" 00:39:32.284 } 00:39:32.284 ]' 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="01526402-05d6-42b2-9a92-7416202f7236") .free_clusters' 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # fc=4 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="01526402-05d6-42b2-9a92-7416202f7236") .cluster_size' 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # cs=1073741824 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # free_mb=4096 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1371 -- # echo 4096 00:39:32.284 4096 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.284 228627c4-16dc-414c-9e0b-c979d30d1a7f 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.284 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:39:32.541 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:32.541 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:39:32.541 fio-3.35 00:39:32.541 Starting 1 thread 00:39:35.067 00:39:35.067 test: (groupid=0, jobs=1): err= 0: pid=105784: Wed May 15 01:05:38 2024 00:39:35.068 read: IOPS=6480, BW=25.3MiB/s (26.5MB/s)(50.9MiB/2009msec) 00:39:35.068 slat (usec): min=2, max=340, avg= 2.69, stdev= 3.87 00:39:35.068 clat (usec): min=4050, max=17337, avg=10350.26, stdev=894.01 00:39:35.068 lat (usec): min=4060, max=17339, avg=10352.94, stdev=893.81 00:39:35.068 clat percentiles (usec): 00:39:35.068 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:39:35.068 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:39:35.068 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:39:35.068 | 99.00th=[12518], 99.50th=[12911], 99.90th=[15139], 99.95th=[17171], 00:39:35.068 | 99.99th=[17433] 00:39:35.068 bw ( KiB/s): min=24760, max=26864, per=99.94%, avg=25908.00, stdev=924.46, samples=4 00:39:35.068 iops : min= 6190, max= 6716, avg=6477.00, stdev=231.12, samples=4 00:39:35.068 write: IOPS=6488, BW=25.3MiB/s (26.6MB/s)(50.9MiB/2009msec); 0 zone resets 00:39:35.068 slat (usec): min=2, max=297, avg= 2.81, stdev= 2.89 00:39:35.068 clat (usec): min=2391, max=17360, avg=9302.52, stdev=839.41 00:39:35.068 lat (usec): min=2403, max=17363, avg=9305.33, stdev=839.27 00:39:35.068 clat percentiles (usec): 00:39:35.068 | 1.00th=[ 7504], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8717], 00:39:35.068 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:39:35.068 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:39:35.068 | 99.00th=[11207], 99.50th=[11600], 99.90th=[15664], 99.95th=[16188], 00:39:35.068 | 99.99th=[17171] 00:39:35.068 bw ( KiB/s): min=25856, max=26104, per=99.98%, avg=25950.00, stdev=108.79, samples=4 00:39:35.068 iops : min= 6464, max= 6526, avg=6487.50, stdev=27.20, samples=4 00:39:35.068 lat (msec) : 4=0.04%, 10=58.86%, 20=41.10% 00:39:35.068 cpu : usr=68.73%, sys=23.90%, ctx=16, majf=0, minf=5 00:39:35.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:39:35.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:35.068 issued rwts: total=13020,13036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:35.068 00:39:35.068 Run status group 0 (all jobs): 00:39:35.068 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=50.9MiB (53.3MB), run=2009-2009msec 00:39:35.068 WRITE: bw=25.3MiB/s (26.6MB/s), 25.3MiB/s-25.3MiB/s (26.6MB/s-26.6MB/s), io=50.9MiB (53.4MB), run=2009-2009msec 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:39:35.068 01:05:38 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=7ab22496-12b1-4f94-9ff6-8d71a4852769 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb 7ab22496-12b1-4f94-9ff6-8d71a4852769 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_uuid=7ab22496-12b1-4f94-9ff6-8d71a4852769 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_info 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local fc 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local cs 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # rpc_cmd bdev_lvol_get_lvstores 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:39:35.068 { 00:39:35.068 "base_bdev": "Nvme0n1", 00:39:35.068 "block_size": 4096, 00:39:35.068 "cluster_size": 1073741824, 00:39:35.068 "free_clusters": 0, 00:39:35.068 "name": "lvs_0", 00:39:35.068 "total_data_clusters": 4, 00:39:35.068 "uuid": "01526402-05d6-42b2-9a92-7416202f7236" 00:39:35.068 }, 00:39:35.068 { 00:39:35.068 "base_bdev": "228627c4-16dc-414c-9e0b-c979d30d1a7f", 00:39:35.068 "block_size": 4096, 00:39:35.068 "cluster_size": 4194304, 00:39:35.068 "free_clusters": 1022, 00:39:35.068 "name": "lvs_n_0", 00:39:35.068 "total_data_clusters": 1022, 00:39:35.068 "uuid": "7ab22496-12b1-4f94-9ff6-8d71a4852769" 00:39:35.068 } 00:39:35.068 ]' 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="7ab22496-12b1-4f94-9ff6-8d71a4852769") .free_clusters' 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # fc=1022 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="7ab22496-12b1-4f94-9ff6-8d71a4852769") .cluster_size' 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # cs=4194304 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # free_mb=4088 00:39:35.068 4088 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1371 -- # echo 4088 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.068 b62a43d4-35b7-4450-bf52-13872b961b92 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:39:35.068 01:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:35.068 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:39:35.068 fio-3.35 00:39:35.068 Starting 1 thread 00:39:37.603 00:39:37.603 test: (groupid=0, jobs=1): err= 0: pid=105839: Wed May 15 01:05:40 2024 00:39:37.603 read: IOPS=5767, BW=22.5MiB/s (23.6MB/s)(45.3MiB/2009msec) 00:39:37.603 slat (usec): min=2, max=294, avg= 2.71, stdev= 3.62 00:39:37.603 clat (usec): min=4464, max=21406, avg=11661.43, stdev=1029.65 00:39:37.603 lat (usec): min=4472, max=21409, avg=11664.14, stdev=1029.40 00:39:37.603 clat percentiles (usec): 00:39:37.603 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:39:37.603 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:39:37.603 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12911], 95.00th=[13304], 00:39:37.603 | 99.00th=[14091], 99.50th=[14615], 99.90th=[19006], 99.95th=[20055], 00:39:37.603 | 99.99th=[21365] 00:39:37.603 bw ( KiB/s): min=22024, max=23552, per=99.83%, avg=23030.00, stdev=684.03, samples=4 00:39:37.603 iops : min= 5506, max= 5888, avg=5757.50, stdev=171.01, samples=4 00:39:37.603 write: IOPS=5751, BW=22.5MiB/s (23.6MB/s)(45.1MiB/2009msec); 0 zone resets 00:39:37.603 slat (usec): min=2, max=201, avg= 2.84, stdev= 2.18 00:39:37.603 clat (usec): min=2182, max=20103, avg=10467.61, stdev=957.06 00:39:37.603 lat (usec): min=2194, max=20106, avg=10470.46, stdev=956.91 00:39:37.603 clat percentiles (usec): 00:39:37.603 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:39:37.603 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:39:37.603 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:39:37.603 | 99.00th=[12518], 99.50th=[12911], 99.90th=[18220], 99.95th=[19792], 00:39:37.603 | 99.99th=[20055] 00:39:37.603 bw ( KiB/s): min=22592, max=23232, per=99.96%, avg=22998.00, stdev=287.81, samples=4 00:39:37.603 iops : min= 5648, max= 5808, avg=5749.50, stdev=71.95, samples=4 00:39:37.603 lat (msec) : 4=0.05%, 10=15.98%, 20=83.95%, 50=0.03% 00:39:37.603 cpu : usr=72.41%, sys=21.61%, ctx=18, majf=0, minf=5 00:39:37.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:39:37.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:37.603 issued rwts: total=11586,11555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:37.603 00:39:37.603 Run status group 0 (all jobs): 00:39:37.603 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.3MiB (47.5MB), run=2009-2009msec 00:39:37.603 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.1MiB (47.3MB), run=2009-2009msec 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.603 01:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:39:37.604 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.604 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.604 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.604 01:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:39:37.604 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.604 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.604 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.604 01:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:39:37.604 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.604 01:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.172 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.172 01:05:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:39:38.172 01:05:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:39:38.172 01:05:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:39:38.172 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:38.172 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:39:38.172 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:38.172 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:39:38.172 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:38.172 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:38.172 rmmod nvme_tcp 00:39:38.431 rmmod nvme_fabrics 00:39:38.431 rmmod nvme_keyring 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 105589 ']' 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 105589 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 105589 ']' 00:39:38.431 01:05:41 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 105589 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 105589 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 105589' 00:39:38.431 killing process with pid 105589 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 105589 00:39:38.431 [2024-05-15 01:05:41.521314] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:39:38.431 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 105589 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:39:38.691 00:39:38.691 real 0m13.016s 00:39:38.691 user 0m54.479s 00:39:38.691 sys 0m3.441s 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:39:38.691 ************************************ 00:39:38.691 END TEST nvmf_fio_host 00:39:38.691 ************************************ 00:39:38.691 01:05:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.691 01:05:41 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:39:38.691 01:05:41 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:39:38.691 01:05:41 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:39:38.691 01:05:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:38.691 ************************************ 00:39:38.691 START TEST nvmf_failover 00:39:38.691 ************************************ 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:39:38.691 * Looking for test storage... 
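Recap of the nvmf_fio_host run that just finished above: it drove I/O through the SPDK fio plugin rather than the kernel NVMe/TCP initiator. A condensed sketch of that invocation, with the plugin path, job file and target address taken from the trace (not the harness's literal code):

  # preload the SPDK NVMe fio plugin and point fio at the TCP subsystem (values from the trace above)
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The job ran a single randrw thread at queue depth 128 and sustained roughly 5.7k IOPS in each direction, as the fio summary above shows.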
00:39:38.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:38.691 01:05:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:38.692 
01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:39:38.692 Cannot find device "nvmf_tgt_br" 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:39:38.692 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:39:38.692 Cannot find device "nvmf_tgt_br2" 00:39:38.952 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:39:38.952 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:39:38.952 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:39:38.952 Cannot find device "nvmf_tgt_br" 00:39:38.952 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:39:38.952 01:05:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:39:38.952 Cannot find device "nvmf_tgt_br2" 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
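The block above tears down any interfaces left over from a previous run; nvmf_veth_init then rebuilds the virtual topology in the block that follows. Condensed to its essentials, it is the sketch below (same ip/iptables calls as in the trace; the second target interface nvmf_tgt_if2 at 10.0.0.3 is created the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the default netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side veth peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The result is a bridged point-to-point path between the initiator (10.0.0.1) and a target living in its own network namespace (10.0.0.2/10.0.0.3), which is what lets the test add and remove NVMe/TCP listeners without touching real hardware.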
00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:38.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:38.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:39:38.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:38.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:39:38.952 00:39:38.952 --- 10.0.0.2 ping statistics --- 00:39:38.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.952 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:39:38.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:38.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:39:38.952 00:39:38.952 --- 10.0.0.3 ping statistics --- 00:39:38.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.952 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:39:38.952 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:39.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:39.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:39:39.212 00:39:39.212 --- 10.0.0.1 ping statistics --- 00:39:39.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.212 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=106057 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 106057 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 106057 ']' 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:39.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
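With connectivity verified by the three pings above, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the JSON-RPC socket answers. Only as a sketch, the polling loop below approximates what the harness does; it is not its exact code:

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # wait until the target is serving JSON-RPC on /var/tmp/spdk.sock
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done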
00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:39.212 01:05:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:39.212 [2024-05-15 01:05:42.321928] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:39:39.212 [2024-05-15 01:05:42.322049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:39.212 [2024-05-15 01:05:42.462853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:39.473 [2024-05-15 01:05:42.557039] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:39.473 [2024-05-15 01:05:42.557103] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:39.473 [2024-05-15 01:05:42.557116] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:39.473 [2024-05-15 01:05:42.557125] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:39.473 [2024-05-15 01:05:42.557132] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:39.473 [2024-05-15 01:05:42.557947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:39.473 [2024-05-15 01:05:42.558113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:39.473 [2024-05-15 01:05:42.558118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.041 01:05:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:40.041 01:05:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:39:40.041 01:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:40.041 01:05:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:39:40.041 01:05:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:40.041 01:05:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:40.041 01:05:43 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:40.304 [2024-05-15 01:05:43.585051] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:40.564 01:05:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:39:40.824 Malloc0 00:39:40.824 01:05:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:41.085 01:05:44 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:41.085 01:05:44 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:41.654 [2024-05-15 01:05:44.684384] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:39:41.654 [2024-05-15 
01:05:44.684698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:41.654 01:05:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:41.912 [2024-05-15 01:05:44.964873] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:41.912 01:05:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:42.170 [2024-05-15 01:05:45.205095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:39:42.170 01:05:45 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:39:42.171 01:05:45 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=106169 00:39:42.171 01:05:45 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:42.171 01:05:45 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 106169 /var/tmp/bdevperf.sock 00:39:42.171 01:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 106169 ']' 00:39:42.171 01:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:42.171 01:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:42.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:42.171 01:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
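At this point failover.sh has finished configuring the target and is starting bdevperf in its own process. Pulled out of the trace, the target-side setup is just this RPC sequence (rpc_py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, as in the script; flags are copied verbatim from the commands logged above):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192               # TCP transport, options as in the trace
  $rpc_py bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB ramdisk with 512-byte blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                # three listeners on the same IP = three paths
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done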
00:39:42.171 01:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:42.171 01:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:42.429 01:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:42.429 01:05:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:39:42.429 01:05:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:42.687 NVMe0n1 00:39:42.687 01:05:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:42.946 00:39:42.946 01:05:46 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=106203 00:39:42.946 01:05:46 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:42.946 01:05:46 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:39:44.321 01:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:44.321 [2024-05-15 01:05:47.483722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d070 is same with the state(5) to be set 00:39:44.321 [2024-05-15 01:05:47.483788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d070 is same with the state(5) to be set 00:39:44.321 [2024-05-15 01:05:47.483801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d070 is same with the state(5) to be set 00:39:44.321 [2024-05-15 01:05:47.483810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d070 is same with the state(5) to be set 00:39:44.321 [2024-05-15 01:05:47.483819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d070 is same with the state(5) to be set 00:39:44.321 [2024-05-15 01:05:47.483828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d070 is same with the state(5) to be set 00:39:44.321 [2024-05-15 01:05:47.483837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d070 is same with the state(5) to be set 00:39:44.321 01:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:39:47.603 01:05:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:47.603 00:39:47.603 01:05:50 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:47.861 01:05:51 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:39:51.154 01:05:54 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:51.154 [2024-05-15 01:05:54.348997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
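The failover exercise itself is driven through the bdevperf RPC socket: two paths to cnode1 are registered under one controller name, the verify workload is started, and listeners are then removed and re-added underneath it. Condensed from the trace (same commands as host/failover.sh lines 35-53 above):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # register two paths to cnode1 under the same controller name NVMe0, enabling multipath failover
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # start the 15-second verify workload, then pull the active listener out from under it
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # later steps attach a third path on 4422, drop 4421, and finally restore 4420, as the trace shows

The repeated nvmf_tcp_qpair_set_recv_state messages that follow are the target-side view of those connections being torn down and re-established while bdevperf keeps issuing I/O.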
00:39:51.154 01:05:54 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:39:52.091 01:05:55 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:52.351 [2024-05-15 01:05:55.595241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.595706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.595800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.595870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.595931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.595992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 
00:39:52.351 [2024-05-15 01:05:55.596864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.596981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.597973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is 
same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598415] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.598970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599190] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.351 [2024-05-15 01:05:55.599870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.599927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.600943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.601007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.601078] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.601673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.601750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.601818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.601879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.601927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.601987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602166] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.602903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 
00:39:52.352 [2024-05-15 01:05:55.603130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603190] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603577] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.603949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.604014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.604089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.604154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.604212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.604278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.352 [2024-05-15 01:05:55.604349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4160 is same with the state(5) to be set 00:39:52.611 01:05:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 106203 00:39:59.181 0 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 106169 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 106169 ']' 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 106169 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@952 -- # uname 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 106169 00:39:59.181 killing process with pid 106169 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 106169' 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 106169 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 106169 00:39:59.181 01:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:39:59.181 [2024-05-15 01:05:45.266208] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:39:59.181 [2024-05-15 01:05:45.266314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106169 ] 00:39:59.181 [2024-05-15 01:05:45.406391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.181 [2024-05-15 01:05:45.501440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.181 Running I/O for 15 seconds... 00:39:59.181 [2024-05-15 01:05:47.484238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.181 [2024-05-15 01:05:47.484291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.181 [2024-05-15 01:05:47.484345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.181 [2024-05-15 01:05:47.484371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.181 [2024-05-15 01:05:47.484390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.181 [2024-05-15 01:05:47.484404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.181 [2024-05-15 01:05:47.484421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.181 [2024-05-15 01:05:47.484435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.181 [2024-05-15 01:05:47.484451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:39:59.182 [2024-05-15 01:05:47.484494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.182 [2024-05-15 01:05:47.484847] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.484883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.484914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.484944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.484974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.484989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.182 [2024-05-15 01:05:47.485607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.182 [2024-05-15 01:05:47.485625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.183 [2024-05-15 01:05:47.485641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.183 [2024-05-15 01:05:47.485672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.183 [2024-05-15 01:05:47.485701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.485731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.183 [2024-05-15 01:05:47.485761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.183 [2024-05-15 01:05:47.485790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.183 [2024-05-15 01:05:47.485820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.183 [2024-05-15 01:05:47.485850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.183 [2024-05-15 01:05:47.485885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.183 [2024-05-15 01:05:47.485923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.183 [2024-05-15 01:05:47.485953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.485970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.485984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 
[2024-05-15 01:05:47.486118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.183 [2024-05-15 01:05:47.486712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.183 [2024-05-15 01:05:47.486726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.486741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:91 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.486755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.486771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.486785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.486801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.486814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.486830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.486844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.486859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.486878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.486894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.486907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.486923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.486938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.486954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.486968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.486983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.487009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.487041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84720 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.487071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.184 [2024-05-15 01:05:47.487108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 
[2024-05-15 01:05:47.487379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487694] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.184 [2024-05-15 01:05:47.487812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.184 [2024-05-15 01:05:47.487828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.487842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.487864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.487883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.487899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.487914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.487929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.487944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.487959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.487973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.487989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.185 [2024-05-15 01:05:47.488336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x550080 is same with the state(5) to be set 00:39:59.185 [2024-05-15 01:05:47.488370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:59.185 [2024-05-15 01:05:47.488381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:59.185 [2024-05-15 01:05:47.488397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84296 len:8 PRP1 0x0 PRP2 0x0 00:39:59.185 [2024-05-15 01:05:47.488411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488470] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x550080 was disconnected and freed. reset controller. 00:39:59.185 [2024-05-15 01:05:47.488505] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:39:59.185 [2024-05-15 01:05:47.488573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.185 [2024-05-15 01:05:47.488607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.185 [2024-05-15 01:05:47.488648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.185 [2024-05-15 01:05:47.488697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.185 [2024-05-15 01:05:47.488732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:47.488746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:59.185 [2024-05-15 01:05:47.492652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:59.185 [2024-05-15 01:05:47.492693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5313c0 (9): Bad file descriptor 00:39:59.185 [2024-05-15 01:05:47.533915] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:39:59.185 [2024-05-15 01:05:51.081190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.185 [2024-05-15 01:05:51.081260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:51.081281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.185 [2024-05-15 01:05:51.081325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:51.081341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.185 [2024-05-15 01:05:51.081355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:51.081369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.185 [2024-05-15 01:05:51.081383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:51.081396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5313c0 is same with the state(5) to be set 00:39:59.185 [2024-05-15 01:05:51.081529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.185 [2024-05-15 01:05:51.081552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:51.081576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.185 [2024-05-15 01:05:51.081591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:51.081625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.185 [2024-05-15 01:05:51.081640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:51.081656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.185 [2024-05-15 01:05:51.081670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:51.081686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.185 [2024-05-15 01:05:51.081700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:51.081716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.185 [2024-05-15 01:05:51.081730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.185 [2024-05-15 01:05:51.081745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.185 [2024-05-15 01:05:51.081759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.081775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.081788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.081804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.081819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.081834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.081848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.081874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.081889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.081905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.081919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.081934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.081949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.081964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.081978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.081993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.186 [2024-05-15 01:05:51.082568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.186 [2024-05-15 01:05:51.082582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 
[2024-05-15 01:05:51.082676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.082975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.082999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.083039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.083069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.083099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.083129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.083158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.083188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.187 [2024-05-15 01:05:51.083218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:43 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89624 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.187 [2024-05-15 01:05:51.083700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.187 [2024-05-15 01:05:51.083716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.083732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.083748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.083761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.083777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.083791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.083807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.083821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.083844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.083858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.083874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.083888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.083904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.083918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.083933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 
[2024-05-15 01:05:51.083947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.083963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.083977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.083992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.188 [2024-05-15 01:05:51.084126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.188 [2024-05-15 01:05:51.084155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.188 [2024-05-15 01:05:51.084186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.188 [2024-05-15 01:05:51.084221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.188 [2024-05-15 01:05:51.084252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.188 [2024-05-15 01:05:51.084282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.188 [2024-05-15 01:05:51.084312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.188 [2024-05-15 01:05:51.084342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.188 [2024-05-15 01:05:51.084809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.188 [2024-05-15 01:05:51.084824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.189 [2024-05-15 01:05:51.084838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.084854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.189 [2024-05-15 01:05:51.084876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.084893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.189 [2024-05-15 01:05:51.084907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.084923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.189 [2024-05-15 01:05:51.084937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.084952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.189 [2024-05-15 01:05:51.084966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.084981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.189 [2024-05-15 01:05:51.085002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.189 [2024-05-15 01:05:51.085033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 
01:05:51.085198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:51.085491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085506] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x551e50 is same with the state(5) to be set 00:39:59.189 [2024-05-15 01:05:51.085523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:59.189 [2024-05-15 01:05:51.085534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:59.189 [2024-05-15 01:05:51.085545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89464 len:8 PRP1 0x0 PRP2 0x0 00:39:59.189 [2024-05-15 01:05:51.085559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:51.085628] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x551e50 was disconnected and freed. reset controller. 00:39:59.189 [2024-05-15 01:05:51.085647] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:39:59.189 [2024-05-15 01:05:51.085662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:59.189 [2024-05-15 01:05:51.089517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:59.189 [2024-05-15 01:05:51.089556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5313c0 (9): Bad file descriptor 00:39:59.189 [2024-05-15 01:05:51.123245] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:59.189 [2024-05-15 01:05:55.601053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.189 [2024-05-15 01:05:55.601096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:55.601115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.189 [2024-05-15 01:05:55.601130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:55.601144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.189 [2024-05-15 01:05:55.601158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:55.601173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:59.189 [2024-05-15 01:05:55.601187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:55.601201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5313c0 is same with the state(5) to be set 00:39:59.189 [2024-05-15 01:05:55.604518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:55.604551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:55.604587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:55.604614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:55.604632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:55.604646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:55.604662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:55.604677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:55.604693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.189 [2024-05-15 01:05:55.604707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.189 [2024-05-15 01:05:55.604723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.604737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.604753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.604767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.604783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.604797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.604813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.604827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.604842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.604857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.604872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.604887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.604902] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.604916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.604932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.604958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.604975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.604989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23680 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.190 [2024-05-15 01:05:55.605835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.190 [2024-05-15 01:05:55.605849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.605865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 
01:05:55.605880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.605896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.605910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.605926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.605940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.605956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.605970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.605986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.191 [2024-05-15 01:05:55.606493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.191 [2024-05-15 01:05:55.606523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.191 [2024-05-15 01:05:55.606560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.191 [2024-05-15 01:05:55.606590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.191 [2024-05-15 01:05:55.606632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.191 [2024-05-15 01:05:55.606662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.191 [2024-05-15 01:05:55.606692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.191 [2024-05-15 01:05:55.606722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.191 [2024-05-15 01:05:55.606751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.191 [2024-05-15 01:05:55.606767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.606781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.606797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.606811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.606826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.606840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.606856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.606871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.606886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.606901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.606916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.606937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.606953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.606967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.606982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 
01:05:55.607145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:59.192 [2024-05-15 01:05:55.607377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:93 nsid:1 lba:24024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.192 [2024-05-15 01:05:55.607895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.192 [2024-05-15 01:05:55.607911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.607925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.607941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.607955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.607970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.607985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24104 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:59.193 [2024-05-15 01:05:55.608427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:59.193 [2024-05-15 01:05:55.608552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x553fc0 is same with the state(5) to be set 00:39:59.193 [2024-05-15 01:05:55.608584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:59.193 [2024-05-15 01:05:55.608604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:59.193 [2024-05-15 01:05:55.608617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:8 PRP1 0x0 PRP2 0x0 00:39:59.193 [2024-05-15 01:05:55.608631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:59.193 [2024-05-15 01:05:55.608697] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x553fc0 was disconnected and freed. reset controller. 00:39:59.193 [2024-05-15 01:05:55.608716] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:39:59.193 [2024-05-15 01:05:55.608730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:59.193 [2024-05-15 01:05:55.612678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:59.193 [2024-05-15 01:05:55.612722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5313c0 (9): Bad file descriptor 00:39:59.193 [2024-05-15 01:05:55.645322] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
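The long run of paired nvme_io_qpair_print_command / spdk_nvme_print_completion notices above is bdevperf's in-flight I/O being completed with ABORTED - SQ DELETION while the TCP qpair is torn down for the failover from 10.0.0.2:4422 back to 10.0.0.2:4420. When skimming a capture like this, a couple of greps are usually enough to summarize the flood; the sketch below assumes the output has been saved to try.txt, the file name this failover test itself uses later in the log.

```bash
# Tally aborted commands by opcode (READ vs WRITE) in a saved bdevperf log.
grep -oE '(READ|WRITE) sqid:1' try.txt | awk '{print $1}' | sort | uniq -c

# Total number of completions aborted because the submission queue was deleted.
grep -c 'ABORTED - SQ DELETION' try.txt
```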
00:39:59.193 00:39:59.193 Latency(us) 00:39:59.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.193 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:59.193 Verification LBA range: start 0x0 length 0x4000 00:39:59.193 NVMe0n1 : 15.01 8947.12 34.95 218.91 0.00 13931.67 610.68 23235.49 00:39:59.193 =================================================================================================================== 00:39:59.193 Total : 8947.12 34.95 218.91 0.00 13931.67 610.68 23235.49 00:39:59.193 Received shutdown signal, test time was about 15.000000 seconds 00:39:59.193 00:39:59.193 Latency(us) 00:39:59.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.193 =================================================================================================================== 00:39:59.193 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:39:59.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=106400 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 106400 /var/tmp/bdevperf.sock 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 106400 ']' 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
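At this point failover.sh starts a second bdevperf instance in RPC-listen mode (-z) on /var/tmp/bdevperf.sock and blocks in waitforlisten until that socket answers. A rough stand-alone equivalent of that launch-and-wait step, with paths taken from the trace rather than from any particular checkout, might look like the sketch below; it approximates what waitforlisten accomplishes and is not the autotest helper itself.

```bash
#!/usr/bin/env bash
BDEVPERF=./build/examples/bdevperf     # path relative to the SPDK repo, as in this log
RPC=./scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# -z: start idle and wait to be configured over RPC; -r: socket to listen on.
"$BDEVPERF" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 &
bdevperf_pid=$!

# Poll the RPC socket until bdevperf is ready to accept commands.
until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "bdevperf ($bdevperf_pid) is listening on $SOCK"
```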
00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:59.193 01:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:59.452 01:06:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:59.452 01:06:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:39:59.452 01:06:02 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:59.710 [2024-05-15 01:06:02.877772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:59.710 01:06:02 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:59.969 [2024-05-15 01:06:03.137982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:39:59.969 01:06:03 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:00.228 NVMe0n1 00:40:00.228 01:06:03 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:00.486 00:40:00.486 01:06:03 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:00.745 00:40:01.003 01:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:01.003 01:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:40:01.260 01:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:01.518 01:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:40:04.796 01:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:04.796 01:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:40:04.796 01:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=106538 00:40:04.796 01:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:04.796 01:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 106538 00:40:05.730 0 00:40:05.730 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:40:05.730 [2024-05-15 01:06:01.649780] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:40:05.731 [2024-05-15 01:06:01.649897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106400 ] 00:40:05.731 [2024-05-15 01:06:01.792394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:05.731 [2024-05-15 01:06:01.887840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.731 [2024-05-15 01:06:04.579186] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:40:05.731 [2024-05-15 01:06:04.579300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:05.731 [2024-05-15 01:06:04.579324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.731 [2024-05-15 01:06:04.579343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:05.731 [2024-05-15 01:06:04.579357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.731 [2024-05-15 01:06:04.579371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:05.731 [2024-05-15 01:06:04.579384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.731 [2024-05-15 01:06:04.579398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:05.731 [2024-05-15 01:06:04.579411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:05.731 [2024-05-15 01:06:04.579425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:05.731 [2024-05-15 01:06:04.579478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:05.731 [2024-05-15 01:06:04.579511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d3c0 (9): Bad file descriptor 00:40:05.731 [2024-05-15 01:06:04.589719] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:40:05.731 Running I/O for 1 seconds... 
00:40:05.731 00:40:05.731 Latency(us) 00:40:05.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:05.731 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:05.731 Verification LBA range: start 0x0 length 0x4000 00:40:05.731 NVMe0n1 : 1.01 9005.15 35.18 0.00 0.00 14139.53 2278.87 15490.33 00:40:05.731 =================================================================================================================== 00:40:05.731 Total : 9005.15 35.18 0.00 0.00 14139.53 2278.87 15490.33 00:40:05.731 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:05.731 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:40:06.298 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:06.298 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:06.298 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:40:06.556 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:06.839 01:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 106400 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 106400 ']' 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 106400 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 106400 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:40:10.138 killing process with pid 106400 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 106400' 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 106400 00:40:10.138 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 106400 00:40:10.394 01:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:40:10.394 01:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:10.651 01:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:40:10.651 01:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:40:10.651 
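The rpc.py calls traced through failover.sh@76-@101 are the core of the multipath exercise: the target gains listeners on 10.0.0.2:4421 and 4422, bdevperf attaches NVMe0 through all three ports, and paths are then detached so the bdev layer has to fail over. Condensed into a stand-alone sketch with relative paths (the test itself uses absolute paths under /home/vagrant/spdk_repo/spdk), the first detach step looks roughly like this:

```bash
RPC=./scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: expose the subsystem on two additional ports.
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# Initiator side (bdevperf): attach the same controller through each path.
for port in 4420 4421 4422; do
    $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
done

# Drop the first path; I/O should fail over to one of the remaining listeners.
$RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n "$NQN"
sleep 3
$RPC -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0   # controller must survive

# Re-run the verify workload on the surviving paths.
./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
```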
01:06:13 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:40:10.651 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:10.651 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:40:10.651 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:10.651 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:40:10.651 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:10.651 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:10.651 rmmod nvme_tcp 00:40:10.651 rmmod nvme_fabrics 00:40:10.910 rmmod nvme_keyring 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 106057 ']' 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 106057 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 106057 ']' 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 106057 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 106057 00:40:10.910 killing process with pid 106057 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 106057' 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 106057 00:40:10.910 [2024-05-15 01:06:13.988714] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:40:10.910 01:06:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 106057 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:40:11.167 ************************************ 00:40:11.167 END TEST nvmf_failover 00:40:11.167 ************************************ 00:40:11.167 00:40:11.167 real 0m32.434s 00:40:11.167 user 2m6.342s 
00:40:11.167 sys 0m4.660s 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:40:11.167 01:06:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:40:11.167 01:06:14 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:40:11.167 01:06:14 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:40:11.167 01:06:14 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:40:11.167 01:06:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:11.167 ************************************ 00:40:11.167 START TEST nvmf_host_discovery 00:40:11.167 ************************************ 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:40:11.167 * Looking for test storage... 00:40:11.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:11.167 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:40:11.168 01:06:14 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:40:11.168 Cannot find device 
"nvmf_tgt_br" 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:40:11.168 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:40:11.426 Cannot find device "nvmf_tgt_br2" 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:40:11.426 Cannot find device "nvmf_tgt_br" 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:40:11.426 Cannot find device "nvmf_tgt_br2" 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:11.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:11.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:40:11.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:11.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:40:11.426 00:40:11.426 --- 10.0.0.2 ping statistics --- 00:40:11.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:11.426 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:40:11.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:11.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:40:11.426 00:40:11.426 --- 10.0.0.3 ping statistics --- 00:40:11.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:11.426 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:40:11.426 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:11.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:11.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:40:11.685 00:40:11.685 --- 10.0.0.1 ping statistics --- 00:40:11.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:11.685 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=106841 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 106841 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 106841 ']' 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:11.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:40:11.685 01:06:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:11.685 [2024-05-15 01:06:14.795440] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:40:11.685 [2024-05-15 01:06:14.795526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:11.685 [2024-05-15 01:06:14.928061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.943 [2024-05-15 01:06:15.027253] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:11.943 [2024-05-15 01:06:15.027511] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:11.943 [2024-05-15 01:06:15.027612] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:11.943 [2024-05-15 01:06:15.027700] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:11.943 [2024-05-15 01:06:15.027779] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:11.943 [2024-05-15 01:06:15.027884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:12.537 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:40:12.537 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:40:12.537 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:12.537 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:40:12.537 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:12.537 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:12.537 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:12.537 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:12.537 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:12.810 [2024-05-15 01:06:15.806349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:12.810 [2024-05-15 01:06:15.814271] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:40:12.810 [2024-05-15 01:06:15.814628] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:12.810 null0 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:12.810 null1 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:40:12.810 
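host/discovery.sh@30-@37 above bring up the target side of the discovery test: a TCP transport, a listener for the well-known discovery subsystem on port 8009, and two null bdevs for the data subsystems created later. Written out as plain rpc.py calls instead of the rpc_cmd wrapper (which in this run targets the nvmf_tgt inside the nvmf_tgt_ns_spdk namespace), the sequence is roughly the sketch below; the final call is the host-side step the script issues a few lines further on against /tmp/host.sock.

```bash
RPC=./scripts/rpc.py                              # target (default /var/tmp/spdk.sock)
HOST_RPC="./scripts/rpc.py -s /tmp/host.sock"     # host-side bdev_nvme instance

# Target: TCP transport and a discovery-subsystem listener on port 8009.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009

# Two null bdevs, created exactly as in the trace, to back the subsystems added later.
$RPC bdev_null_create null0 1000 512
$RPC bdev_null_create null1 1000 512
$RPC bdev_wait_for_examine

# Host: follow the discovery log on 8009 and auto-attach whatever it advertises.
$HOST_RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test
```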
01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=106887 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 106887 /tmp/host.sock 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 106887 ']' 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:40:12.810 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:40:12.810 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:40:12.811 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:40:12.811 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:12.811 [2024-05-15 01:06:15.890422] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:40:12.811 [2024-05-15 01:06:15.890495] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106887 ] 00:40:12.811 [2024-05-15 01:06:16.023713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.069 [2024-05-15 01:06:16.120779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@83 -- # get_subsystem_names 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:13.637 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.897 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:40:13.897 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:40:13.897 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:13.897 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:13.897 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:13.897 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.897 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:13.897 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.897 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.897 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.897 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.898 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:40:13.898 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:13.898 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:13.898 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.898 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.898 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:13.898 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:13.898 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.898 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.157 [2024-05-15 01:06:17.246868] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:14.157 
01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.157 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]] 00:40:14.416 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:40:14.675 [2024-05-15 01:06:17.899868] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:40:14.675 [2024-05-15 01:06:17.899915] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:40:14.675 [2024-05-15 01:06:17.899939] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:40:14.933 [2024-05-15 01:06:17.986031] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:40:14.933 [2024-05-15 01:06:18.042204] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:40:14.933 [2024-05-15 01:06:18.042253] 
bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:15.500 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 
00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:15.501 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.760 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:15.760 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:15.760 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:40:15.760 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:40:15.760 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:15.760 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:15.760 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:15.760 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.761 [2024-05-15 01:06:18.852112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:15.761 [2024-05-15 01:06:18.852879] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:40:15.761 [2024-05-15 01:06:18.852919] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:15.761 01:06:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.761 [2024-05-15 01:06:18.938948] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:15.761 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.761 [2024-05-15 01:06:19.002258] 
bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:40:15.761 [2024-05-15 01:06:19.002306] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:40:15.761 [2024-05-15 01:06:19.002315] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:40:15.761 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:40:15.761 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.140 [2024-05-15 01:06:20.153167] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:40:17.140 [2024-05-15 01:06:20.153204] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:17.140 [2024-05-15 01:06:20.159565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:17.140 [2024-05-15 01:06:20.159743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.140 [2024-05-15 01:06:20.159763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:17.140 [2024-05-15 01:06:20.159774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.140 [2024-05-15 01:06:20.159785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:17.140 [2024-05-15 01:06:20.159794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.140 [2024-05-15 01:06:20.159804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:17.140 [2024-05-15 01:06:20.159813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.140 [2024-05-15 01:06:20.159823] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18088a0 is same with the state(5) to be set 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.140 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:17.141 [2024-05-15 01:06:20.169522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18088a0 (9): Bad file descriptor 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.141 [2024-05-15 01:06:20.179543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:17.141 [2024-05-15 01:06:20.179697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.179752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.179769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18088a0 with addr=10.0.0.2, port=4420 00:40:17.141 [2024-05-15 01:06:20.179781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18088a0 is same with the state(5) to be set 00:40:17.141 [2024-05-15 01:06:20.179800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18088a0 (9): Bad file descriptor 00:40:17.141 [2024-05-15 01:06:20.179816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:17.141 [2024-05-15 01:06:20.179825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:17.141 [2024-05-15 01:06:20.179835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:17.141 [2024-05-15 01:06:20.179851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:17.141 [2024-05-15 01:06:20.189617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:17.141 [2024-05-15 01:06:20.189726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.189773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.189789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18088a0 with addr=10.0.0.2, port=4420 00:40:17.141 [2024-05-15 01:06:20.189800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18088a0 is same with the state(5) to be set 00:40:17.141 [2024-05-15 01:06:20.189816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18088a0 (9): Bad file descriptor 00:40:17.141 [2024-05-15 01:06:20.189831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:17.141 [2024-05-15 01:06:20.189841] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:17.141 [2024-05-15 01:06:20.189850] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:17.141 [2024-05-15 01:06:20.189865] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:17.141 [2024-05-15 01:06:20.199684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:17.141 [2024-05-15 01:06:20.199786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.199832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.199849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18088a0 with addr=10.0.0.2, port=4420 00:40:17.141 [2024-05-15 01:06:20.199859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18088a0 is same with the state(5) to be set 00:40:17.141 [2024-05-15 01:06:20.199876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18088a0 (9): Bad file descriptor 00:40:17.141 [2024-05-15 01:06:20.199891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:17.141 [2024-05-15 01:06:20.199899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:17.141 [2024-05-15 01:06:20.199908] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:17.141 [2024-05-15 01:06:20.199923] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:17.141 [2024-05-15 01:06:20.209744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:17.141 [2024-05-15 01:06:20.209833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.209880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.209896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18088a0 with addr=10.0.0.2, port=4420 00:40:17.141 [2024-05-15 01:06:20.209907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18088a0 is same with the state(5) to be set 00:40:17.141 [2024-05-15 01:06:20.209923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18088a0 (9): Bad file descriptor 00:40:17.141 [2024-05-15 01:06:20.209938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:17.141 [2024-05-15 01:06:20.209946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:17.141 [2024-05-15 01:06:20.209956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:17.141 [2024-05-15 01:06:20.209970] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:17.141 [2024-05-15 01:06:20.219800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:17.141 [2024-05-15 01:06:20.219869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.219915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.219932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18088a0 with addr=10.0.0.2, port=4420 00:40:17.141 [2024-05-15 01:06:20.219942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18088a0 is same with the state(5) to be set 00:40:17.141 [2024-05-15 01:06:20.219957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18088a0 (9): Bad file descriptor 00:40:17.141 [2024-05-15 01:06:20.219972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:17.141 [2024-05-15 
01:06:20.219981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:17.141 [2024-05-15 01:06:20.219990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:17.141 [2024-05-15 01:06:20.220005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.141 [2024-05-15 01:06:20.229846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:17.141 [2024-05-15 01:06:20.229955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.230003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:17.141 [2024-05-15 01:06:20.230019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18088a0 with addr=10.0.0.2, port=4420 00:40:17.141 [2024-05-15 01:06:20.230030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18088a0 is same with the state(5) to be set 00:40:17.141 [2024-05-15 01:06:20.230047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18088a0 (9): Bad file descriptor 00:40:17.141 [2024-05-15 01:06:20.230062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:17.141 [2024-05-15 01:06:20.230071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:17.141 [2024-05-15 01:06:20.230080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:17.141 [2024-05-15 01:06:20.230095] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:17.141 [2024-05-15 01:06:20.239230] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:40:17.141 [2024-05-15 01:06:20.239271] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:17.141 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:17.142 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:40:17.401 
01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.401 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.336 [2024-05-15 01:06:21.580313] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:40:18.336 [2024-05-15 01:06:21.580357] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:40:18.336 [2024-05-15 01:06:21.580375] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:40:18.594 [2024-05-15 01:06:21.666429] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:40:18.594 [2024-05-15 01:06:21.725951] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:40:18.594 [2024-05-15 01:06:21.726026] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:40:18.594 2024/05/15 01:06:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:40:18.594 request: 00:40:18.594 { 00:40:18.594 "method": "bdev_nvme_start_discovery", 00:40:18.594 "params": { 00:40:18.594 "name": "nvme", 00:40:18.594 "trtype": "tcp", 00:40:18.594 "traddr": "10.0.0.2", 00:40:18.594 "hostnqn": "nqn.2021-12.io.spdk:test", 00:40:18.594 "adrfam": "ipv4", 00:40:18.594 "trsvcid": "8009", 00:40:18.594 "wait_for_attach": true 00:40:18.594 } 00:40:18.594 } 00:40:18.594 Got JSON-RPC error response 00:40:18.594 GoRPCClient: error on JSON-RPC call 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.594 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.595 2024/05/15 01:06:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:40:18.595 request: 00:40:18.595 { 00:40:18.595 "method": "bdev_nvme_start_discovery", 00:40:18.595 "params": { 00:40:18.595 "name": "nvme_second", 00:40:18.595 "trtype": "tcp", 00:40:18.595 "traddr": "10.0.0.2", 00:40:18.595 "hostnqn": "nqn.2021-12.io.spdk:test", 00:40:18.595 "adrfam": "ipv4", 00:40:18.595 "trsvcid": "8009", 00:40:18.595 "wait_for_attach": true 00:40:18.595 } 00:40:18.595 } 00:40:18.595 Got JSON-RPC error response 00:40:18.595 GoRPCClient: error on JSON-RPC call 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:40:18.595 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:18.853 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:40:18.853 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:40:18.853 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:18.853 01:06:21 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:40:18.853 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.853 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.854 01:06:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:19.790 [2024-05-15 01:06:22.979814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:19.790 [2024-05-15 01:06:22.979919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:19.790 [2024-05-15 01:06:22.979939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x180c100 with addr=10.0.0.2, port=8010 00:40:19.790 [2024-05-15 01:06:22.979962] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:19.790 [2024-05-15 01:06:22.979973] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:19.790 [2024-05-15 01:06:22.979983] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:40:20.724 [2024-05-15 01:06:23.979760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:20.724 [2024-05-15 01:06:23.979846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:20.724 [2024-05-15 01:06:23.979865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x180c100 with addr=10.0.0.2, port=8010 00:40:20.724 [2024-05-15 01:06:23.979888] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:20.724 [2024-05-15 01:06:23.979898] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:20.724 [2024-05-15 01:06:23.979908] 
bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:40:21.704 [2024-05-15 01:06:24.979632] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:40:21.704 2024/05/15 01:06:24 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:40:21.704 request: 00:40:21.704 { 00:40:21.704 "method": "bdev_nvme_start_discovery", 00:40:21.704 "params": { 00:40:21.704 "name": "nvme_second", 00:40:21.704 "trtype": "tcp", 00:40:21.704 "traddr": "10.0.0.2", 00:40:21.704 "hostnqn": "nqn.2021-12.io.spdk:test", 00:40:21.704 "adrfam": "ipv4", 00:40:21.704 "trsvcid": "8010", 00:40:21.704 "attach_timeout_ms": 3000 00:40:21.704 } 00:40:21.704 } 00:40:21.704 Got JSON-RPC error response 00:40:21.704 GoRPCClient: error on JSON-RPC call 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:21.704 01:06:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 106887 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:21.963 rmmod nvme_tcp 00:40:21.963 rmmod nvme_fabrics 00:40:21.963 rmmod nvme_keyring 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:21.963 
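The Code=-110 failure above is the expected negative case: discovery is pointed at port 8010, where nothing listens in this topology, with a 3000 ms attach timeout (-T 3000); after the connect attempts traced above the poller gives up, the RPC returns "Connection timed out", and the suite tears down (rmmod, killprocess). The same check can be reproduced by hand against the host socket used here, with the arguments taken verbatim from the trace and rpc.py substituted for the rpc_cmd wrapper:

# Expected to fail with JSON-RPC error Code=-110 after roughly 3 seconds,
# because no discovery subsystem listens on 10.0.0.2:8010 in this setup.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
if ! "$rpc_py" -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000; then
    echo "discovery attach timed out as expected"
fi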
01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 106841 ']' 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 106841 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 106841 ']' 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 106841 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # uname 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 106841 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:40:21.963 killing process with pid 106841 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 106841' 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 106841 00:40:21.963 [2024-05-15 01:06:25.177724] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:40:21.963 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 106841 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:40:22.222 00:40:22.222 real 0m11.110s 00:40:22.222 user 0m21.994s 00:40:22.222 sys 0m1.607s 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:22.222 ************************************ 00:40:22.222 END TEST nvmf_host_discovery 00:40:22.222 ************************************ 00:40:22.222 01:06:25 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:40:22.222 01:06:25 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:40:22.222 01:06:25 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:40:22.222 01:06:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:22.222 
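The END TEST banner and the real/user/sys summary above come from run_test, which wraps each suite script and times it; the run_test line for the next suite shows the underlying call, reproduced here for reference:

# The command run_test wraps for the suite that starts below; run_test adds
# the timing summary and the START/END TEST banners around it.
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp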
************************************ 00:40:22.222 START TEST nvmf_host_multipath_status 00:40:22.222 ************************************ 00:40:22.222 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:40:22.481 * Looking for test storage... 00:40:22.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.481 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:40:22.482 01:06:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 
-- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:40:22.482 Cannot find device "nvmf_tgt_br" 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:40:22.482 Cannot find device "nvmf_tgt_br2" 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:40:22.482 Cannot find device "nvmf_tgt_br" 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:40:22.482 Cannot find device "nvmf_tgt_br2" 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:22.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:22.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:22.482 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:22.742 01:06:25 
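The nvmf_veth_init steps traced in this stretch build the test topology: one veth pair for the initiator on 10.0.0.1 and two pairs for the target on 10.0.0.2 and 10.0.0.3, with the target ends moved into the nvmf_tgt_ns_spdk namespace; the bridge ends are brought up and enslaved to nvmf_br in the lines that follow. Condensed for readability (commands as traced; the preliminary cleanup, link-up, and iptables steps are omitted):

# Condensed topology setup from the nvmf_veth_init sequence above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target path 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2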
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:40:22.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:22.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:40:22.742 00:40:22.742 --- 10.0.0.2 ping statistics --- 00:40:22.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.742 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:40:22.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:22.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:40:22.742 00:40:22.742 --- 10.0.0.3 ping statistics --- 00:40:22.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.742 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:22.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:22.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:40:22.742 00:40:22.742 --- 10.0.0.1 ping statistics --- 00:40:22.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.742 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=107372 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 107372 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 107372 ']' 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:40:22.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:40:22.742 01:06:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:22.742 [2024-05-15 01:06:25.985935] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
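nvmfappstart above launches the target binary inside the namespace and waits for its RPC socket before any rpc.py call is issued. A minimal sketch of that launch, with the command reproduced from the trace (the background/wait handling is my reading of nvmfappstart and waitforlisten, not shown verbatim above):

# Launch nvmf_tgt inside the target namespace: shared-memory id 0 (-i 0),
# tracepoint group mask 0xFFFF (-e 0xFFFF), cores 0-1 (-m 0x3).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# waitforlisten then polls /var/tmp/spdk.sock until the RPC server answers,
# which is the "Waiting for process to start up..." message in the trace.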
00:40:22.742 [2024-05-15 01:06:25.986043] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:23.001 [2024-05-15 01:06:26.129015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:23.001 [2024-05-15 01:06:26.225316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:23.001 [2024-05-15 01:06:26.225367] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:23.001 [2024-05-15 01:06:26.225381] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:23.001 [2024-05-15 01:06:26.225391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:23.001 [2024-05-15 01:06:26.225400] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:23.001 [2024-05-15 01:06:26.225548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:23.001 [2024-05-15 01:06:26.225741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.978 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:40:23.978 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:40:23.978 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:23.978 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:40:23.978 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:23.978 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:23.978 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=107372 00:40:23.978 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:23.978 [2024-05-15 01:06:27.218408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:24.292 01:06:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:40:24.292 Malloc0 00:40:24.292 01:06:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:40:24.551 01:06:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:24.809 01:06:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:25.067 [2024-05-15 01:06:28.266248] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:40:25.067 [2024-05-15 01:06:28.266519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
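The target-side provisioning traced above reduces to a handful of rpc.py calls against the default /var/tmp/spdk.sock; they are consolidated here with the arguments exactly as traced (the flag readings in the comments are my interpretation: -a allow any host, -s serial number, -r ANA reporting, -m 2 namespace cap):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192          # NVMF_TRANSPORT_OPTS from the trace
$rpc_py bdev_malloc_create 64 512 -b Malloc0             # 64 MB bdev, 512-byte blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Two listeners on the same address give the host two paths to the namespace
# (the 4421 listener is added just below in the trace).
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421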
00:40:25.068 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:25.326 [2024-05-15 01:06:28.566635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:25.326 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=107478 00:40:25.326 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:40:25.326 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:25.326 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 107478 /var/tmp/bdevperf.sock 00:40:25.326 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 107478 ']' 00:40:25.326 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:25.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:25.326 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:40:25.326 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:25.326 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:40:25.326 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:26.759 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:40:26.759 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:40:26.759 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:40:26.759 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:40:27.018 Nvme0n1 00:40:27.018 01:06:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:40:27.277 Nvme0n1 00:40:27.277 01:06:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:40:27.277 01:06:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:40:29.810 01:06:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:40:29.810 01:06:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:40:29.810 01:06:32 
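On the host side, both listeners are attached to the same bdev through the bdevperf RPC socket; the second bdev_nvme_attach_controller call carries -x multipath, so it adds a second path to the existing Nvme0 controller instead of failing. Condensed from the trace (the flag readings in the comments are mine and worth checking against rpc.py --help; the suite runs perform_tests in the background while the path checks below execute):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
$rpc_py -s $sock bdev_nvme_set_options -r -1
# Path 1 over port 4420; -l -1 -o 10 keep the controller retrying after path loss.
$rpc_py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
# Path 2 over port 4421, joined to the same controller as a multipath path.
$rpc_py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# Kick off the verify workload bdevperf was started with (-q 128 -o 4096 -w verify -t 90).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s $sock perform_tests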
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:29.810 01:06:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:40:31.183 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:40:31.183 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:31.183 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:31.183 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:31.183 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:31.183 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:31.183 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:31.183 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:31.441 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:31.441 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:31.441 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:31.441 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:31.699 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:31.699 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:31.699 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:31.699 01:06:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:31.958 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:31.958 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:31.958 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:31.958 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:32.217 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:32.217 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:32.217 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:32.217 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:32.483 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:32.483 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:40:32.483 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:32.742 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:33.001 01:06:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:40:33.935 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:40:33.935 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:33.935 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:33.935 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:34.194 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:34.194 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:34.194 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:34.194 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:34.452 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:34.452 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:34.452 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:34.452 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:34.709 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:34.709 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:34.709 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:34.709 
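Each port_status check above is a single bdev_nvme_get_io_paths call filtered with jq; roughly, "current" marks the path presently used for I/O, "connected" the transport connection state, and "accessible" whether the listener's ANA state permits I/O. A standalone version of the 4421 check as used in the trace:

# Prints true/false for the 4421 path attribute selected in the jq filter;
# swap "accessible" for "current" or "connected" to match the other checks.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'

After the set_ANA_state non_optimized inaccessible step at the end of this excerpt, this query is expected to print false for the 4421 path while the 4420 path stays current, connected, and accessible.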
01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:34.966 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:34.966 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:34.966 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:34.966 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:35.222 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:35.223 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:35.223 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:35.223 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:35.546 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:35.546 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:40:35.546 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:35.825 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:40:36.082 01:06:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:40:37.015 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:40:37.015 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:37.015 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:37.015 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:37.272 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:37.272 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:37.272 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:37.272 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:37.531 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
false == \f\a\l\s\e ]] 00:40:37.531 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:37.531 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:37.531 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:37.789 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:37.789 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:37.789 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:37.789 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.048 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:38.048 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:38.048 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.048 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:38.306 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:38.306 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:38.306 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.306 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:38.872 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:38.872 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:40:38.872 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:38.872 01:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:40:39.131 01:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:40:40.508 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:40:40.508 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:40.508 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:40.508 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:40.508 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:40.508 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:40.508 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:40.508 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:40.766 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:40.766 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:40.766 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:40.766 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:41.024 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:41.024 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:41.024 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.024 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:41.282 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:41.282 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:41.282 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.282 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:41.540 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:41.540 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:41.540 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.540 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:41.798 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:41.798 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:40:41.798 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:42.057 01:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:40:42.315 01:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:40:43.250 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:40:43.250 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:43.250 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.250 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:43.510 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:43.510 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:43.510 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.510 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:43.768 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:43.768 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:43.768 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:43.768 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.027 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:44.027 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:44.027 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.027 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:44.285 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:44.285 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:44.285 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.285 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:44.544 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:44.544 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:44.544 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.544 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:44.803 01:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:44.803 01:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:40:44.803 01:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:45.061 01:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:45.320 01:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:40:46.311 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:40:46.311 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:46.311 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.311 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:46.569 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:46.569 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:46.569 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.569 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:46.827 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:46.827 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:46.827 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.827 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:47.086 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:47.086 01:06:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:47.086 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:47.086 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:47.344 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:47.345 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:47.345 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:47.345 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:47.603 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:47.603 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:47.603 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:47.603 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:47.861 01:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:47.861 01:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:40:48.119 01:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:40:48.119 01:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:40:48.377 01:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:48.636 01:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:40:49.572 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:40:49.572 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:49.572 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:49.572 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:49.830 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:49.830 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:49.830 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:49.830 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:50.088 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:50.088 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:50.088 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:50.088 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:50.352 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:50.352 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:50.352 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:50.352 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:50.613 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:50.613 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:50.613 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:50.613 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:50.872 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:50.872 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:50.872 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:50.872 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:51.130 01:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:51.130 01:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:40:51.130 01:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:51.389 01:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:51.646 01:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:40:52.580 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:40:52.580 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:52.580 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.580 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:52.838 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:52.838 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:52.838 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.838 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:53.096 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:53.096 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:53.096 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:53.096 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:53.660 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:53.660 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:53.660 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:53.660 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:53.660 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:53.660 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:53.660 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:53.660 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:53.917 01:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:53.917 01:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:53.917 01:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:53.917 01:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:54.176 01:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:54.176 01:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:40:54.176 01:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:54.434 01:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:40:54.691 01:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:40:55.688 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:40:55.688 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:55.688 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.688 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:55.946 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:55.946 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:55.946 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.946 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:56.203 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:56.203 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:56.203 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:56.203 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:56.461 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:56.461 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:56.461 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:56.461 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:40:56.719 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:56.719 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:56.719 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:56.719 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:56.977 01:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:56.977 01:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:56.977 01:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:56.977 01:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:57.256 01:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:57.256 01:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:40:57.256 01:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:57.513 01:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:40:57.770 01:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:40:58.703 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:40:58.703 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:58.703 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.703 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:58.962 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.962 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:58.962 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.962 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:59.527 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:59.527 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:40:59.527 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:59.527 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:59.527 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:59.527 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:59.527 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:59.528 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:59.786 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:59.786 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:59.786 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:59.786 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:41:00.045 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:00.045 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:41:00.045 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.045 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 107478 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 107478 ']' 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 107478 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 107478 00:41:00.303 killing process with pid 107478 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 107478' 00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 107478 
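For readers following the trace above: the test is repeatedly flipping the ANA state advertised by the two TCP listeners (ports 4420/4421), sleeping one second, and then asserting the `current`/`connected`/`accessible` flags reported by the initiator through `bdev_nvme_get_io_paths`. The following is a minimal Bash sketch of the helpers the log exercises (`port_status`, `check_status`, `set_ANA_state`), reconstructed only from the commands echoed in the trace; the real host/multipath_status.sh may differ, and the `NVMF_FIRST_TARGET_IP` variable name is an assumption (the log shows the literal address 10.0.0.2).

```bash
#!/usr/bin/env bash
# Sketch of the multipath status helpers, reconstructed from the xtrace above.
set -euo pipefail

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the log
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1
NVMF_FIRST_TARGET_IP=10.0.0.2                        # assumed variable name

# Assert that one field (current/connected/accessible) of the io_path using
# the given listener port matches the expected true/false value.
port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

# Check current/connected/accessible for both listeners (4420 then 4421),
# in the same order the trace evaluates them.
check_status() {
    port_status 4420 current    "$1"
    port_status 4421 current    "$2"
    port_status 4420 connected  "$3"
    port_status 4421 connected  "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

# Flip the ANA state advertised by each listener on the target side.
set_ANA_state() {
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp \
        -a "$NVMF_FIRST_TARGET_IP" -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp \
        -a "$NVMF_FIRST_TARGET_IP" -s 4421 -n "$2"
}

# Example sequence mirroring the log: both listeners non_optimized, then 4421
# made inaccessible, then active_active multipath policy with both optimized.
set_ANA_state non_optimized non_optimized; sleep 1; check_status true false true true true true
set_ANA_state non_optimized inaccessible;  sleep 1; check_status true false true true true false
"$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
set_ANA_state optimized optimized;         sleep 1; check_status true true true true true true
```

The one-second `sleep` between `set_ANA_state` and `check_status` gives the host's ANA log page update a chance to propagate before the flags are asserted, which is why each state change in the trace is followed by a `sleep 1` line.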
00:41:00.303 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 107478 00:41:00.561 Connection closed with partial response: 00:41:00.561 00:41:00.561 00:41:00.821 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 107478 00:41:00.821 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:41:00.821 [2024-05-15 01:06:28.626715] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:41:00.822 [2024-05-15 01:06:28.626813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107478 ] 00:41:00.822 [2024-05-15 01:06:28.765273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:00.822 [2024-05-15 01:06:28.859099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:41:00.822 Running I/O for 90 seconds... 00:41:00.822 [2024-05-15 01:06:45.203922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.822 [2024-05-15 01:06:45.204799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.204847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.204896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.204944] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.204976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.204996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.822 [2024-05-15 01:06:45.205744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:00.822 [2024-05-15 01:06:45.205771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.823 [2024-05-15 01:06:45.205793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.205819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.823 [2024-05-15 01:06:45.205840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.205880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.823 [2024-05-15 01:06:45.205902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.205932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.823 [2024-05-15 01:06:45.205953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.205981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.823 [2024-05-15 01:06:45.206001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.206028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.823 [2024-05-15 01:06:45.206049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.206076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.823 [2024-05-15 01:06:45.206097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.206123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.823 [2024-05-15 01:06:45.206144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.206172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.823 [2024-05-15 01:06:45.206192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.823 [2024-05-15 01:06:45.208284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 
01:06:45.208492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.208961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.208994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.823 [2024-05-15 01:06:45.209661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:00.823 [2024-05-15 01:06:45.209695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.824 [2024-05-15 01:06:45.209716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.209823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.824 [2024-05-15 01:06:45.209851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.209889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.209912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.209947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.209968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210339] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:06:45.210721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:06:45.210742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.943007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.943120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.943240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.943267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.943297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.943317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.943344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.943365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.943392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.943412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.824 [2024-05-15 01:07:00.945186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.824 [2024-05-15 01:07:00.945244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.824 [2024-05-15 01:07:00.945291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.824 [2024-05-15 01:07:00.945339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.824 [2024-05-15 01:07:00.945385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.824 [2024-05-15 01:07:00.945443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.945489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:102 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.945536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.945616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.945671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.945718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.945766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:00.824 [2024-05-15 01:07:00.945793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.824 [2024-05-15 01:07:00.945813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.945840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.825 [2024-05-15 01:07:00.945860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.946763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.946796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.946826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.946847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.946874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.946895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.946922] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.946943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.946970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.825 [2024-05-15 01:07:00.947173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.825 [2024-05-15 01:07:00.947220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:00.825 [2024-05-15 01:07:00.947268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 
m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:00.825 [2024-05-15 01:07:00.947659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:00.825 [2024-05-15 01:07:00.947681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:00.825 Received shutdown signal, test time was about 32.956449 seconds 00:41:00.825 00:41:00.825 Latency(us) 00:41:00.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:00.825 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:41:00.825 Verification LBA range: start 0x0 length 0x4000 00:41:00.825 Nvme0n1 : 32.96 8465.78 33.07 0.00 0.00 15089.17 143.36 4026531.84 00:41:00.825 =================================================================================================================== 00:41:00.825 Total : 8465.78 33.07 0.00 0.00 15089.17 143.36 4026531.84 00:41:00.825 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:01.084 01:07:04 
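Note on the stream of completion notices above: status (03/02) is SCT 3h ("Path Related Status") with SC 02h, "Asymmetric Access Inaccessible", meaning the target answered I/O on a path whose ANA group is currently inaccessible. That is what the multipath_status test provokes by flipping the ANA state of the target's listeners while a verify workload runs (the RPC calls that do the flipping are not part of this excerpt), and the host's bdev_nvme multipath layer is expected to retry the affected commands on an accessible path, which is why the summary above still reports no failed or timed-out I/O at 8465.78 IOPS over the 32.96 s run. A purely illustrative way to triage a flood like this from a saved console log (the build.log file name is hypothetical, point it at wherever this output was captured):

    # Count the path-related error completions, then break them down by queue/command id.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log
    grep 'spdk_nvme_print_completion' build.log \
        | grep -o 'qid:[0-9]* cid:[0-9]*' | sort | uniq -c | sort -rn | head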
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:01.084 rmmod nvme_tcp 00:41:01.084 rmmod nvme_fabrics 00:41:01.084 rmmod nvme_keyring 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 107372 ']' 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 107372 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 107372 ']' 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 107372 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 107372 00:41:01.084 killing process with pid 107372 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 107372' 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 107372 00:41:01.084 [2024-05-15 01:07:04.241165] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:41:01.084 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 107372 00:41:01.342 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:01.342 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:01.342 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:01.342 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:01.342 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:01.342 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:01.342 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:01.342 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.342 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:41:01.342 ************************************ 00:41:01.342 END TEST nvmf_host_multipath_status 00:41:01.342 ************************************ 00:41:01.342 00:41:01.342 real 0m39.045s 00:41:01.342 user 2m6.523s 00:41:01.342 sys 0m10.131s 00:41:01.342 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:41:01.342 01:07:04 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:41:01.342 01:07:04 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:41:01.342 01:07:04 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:41:01.342 01:07:04 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:41:01.342 01:07:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:01.342 ************************************ 00:41:01.342 START TEST nvmf_discovery_remove_ifc 00:41:01.342 ************************************ 00:41:01.342 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:41:01.601 * Looking for test storage... 00:41:01.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
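For orientation, the discovery_remove_ifc test that starts here boils down to: run an SPDK target inside a network namespace with a discovery service on 10.0.0.2:8009 and a subsystem listener on 10.0.0.2:4420, run a second SPDK app as the host (controlled through /tmp/host.sock), attach via discovery so that an nvme0n1 bdev appears, then delete the target-side interface and wait for that bdev to disappear again. A condensed sketch of the sequence, using the rpc.py invocations that appear verbatim later in this log (the polling loop is a simplification of the script's wait_for_bdev helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Host-side app was started as: nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
    $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1
    $rpc -s /tmp/host.sock framework_start_init
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # Pull the target-side interface away, then wait for nvme0n1 to go away with it.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    while [ -n "$($rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
        sleep 1
    done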
00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 
-- # '[' tcp == rdma ']' 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- 
# NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:41:01.601 Cannot find device "nvmf_tgt_br" 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:41:01.601 Cannot find device "nvmf_tgt_br2" 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:41:01.601 Cannot find device "nvmf_tgt_br" 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:41:01.601 Cannot find device "nvmf_tgt_br2" 00:41:01.601 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:01.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:01.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:41:01.602 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:41:01.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:01.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:41:01.860 00:41:01.860 --- 10.0.0.2 ping statistics --- 00:41:01.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.860 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:41:01.860 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:01.860 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:41:01.860 00:41:01.860 --- 10.0.0.3 ping statistics --- 00:41:01.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.860 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:01.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:01.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:41:01.860 00:41:01.860 --- 10.0.0.1 ping statistics --- 00:41:01.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.860 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:01.860 01:07:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=108770 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 108770 00:41:01.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 108770 ']' 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:01.860 01:07:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:01.860 [2024-05-15 01:07:05.073974] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:41:01.860 [2024-05-15 01:07:05.074077] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:02.118 [2024-05-15 01:07:05.214752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:02.118 [2024-05-15 01:07:05.300413] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
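By this point nvmf_veth_init has built the test topology: nvmf_init_if (10.0.0.1) stays in the root namespace as the initiator-side endpoint, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, and an iptables rule accepts TCP port 4420 on the initiator interface; the three pings confirm the wiring before the target app (pid 108770) is launched inside the namespace. As an illustration only (the test drives a second SPDK app as the host rather than the kernel initiator), once the listeners are created a bit further down, the discovery service would also be reachable from the root namespace with nvme-cli, since nvme-tcp was modprobed above:

    # Illustration only - not part of the test script.
    ip -br addr show                                   # veth/bridge layout on the initiator side
    ip netns exec nvmf_tgt_ns_spdk ip -br addr show    # target-side view
    nvme discover -t tcp -a 10.0.0.2 -s 8009           # query the discovery service on 8009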
00:41:02.118 [2024-05-15 01:07:05.300473] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:02.118 [2024-05-15 01:07:05.300488] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:02.118 [2024-05-15 01:07:05.300499] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:02.118 [2024-05-15 01:07:05.300509] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:02.118 [2024-05-15 01:07:05.300537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:03.051 [2024-05-15 01:07:06.117556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:03.051 [2024-05-15 01:07:06.125484] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:41:03.051 [2024-05-15 01:07:06.125734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:41:03.051 null0 00:41:03.051 [2024-05-15 01:07:06.157629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=108819 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 108819 /tmp/host.sock 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 108819 ']' 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:41:03.051 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
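Two SPDK processes are now running, and every rpc_cmd from here on selects one of them by RPC socket: the target (pid 108770, core mask 0x2, default /var/tmp/spdk.sock) has just created the TCP transport, a bdev named null0 (the bare "null0" in the RPC output above) and listeners on 10.0.0.2 ports 8009 (discovery) and 4420, while the host-side app (pid 108819, core mask 0x1) was launched with -r /tmp/host.sock --wait-for-rpc -L bdev_nvme and is still coming up. A minimal sketch of how the two are addressed (both RPCs shown are stock rpc.py methods, used here only as examples):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target app: default RPC socket (/var/tmp/spdk.sock), so no -s is needed.
    $rpc nvmf_get_subsystems

    # Host app: addressed explicitly through its own socket.
    $rpc -s /tmp/host.sock bdev_get_bdevs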
00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:03.051 01:07:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:03.051 [2024-05-15 01:07:06.260959] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:41:03.051 [2024-05-15 01:07:06.261081] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108819 ] 00:41:03.309 [2024-05-15 01:07:06.413279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:03.309 [2024-05-15 01:07:06.504381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:04.242 01:07:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:05.177 [2024-05-15 01:07:08.360909] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:41:05.177 [2024-05-15 01:07:08.360953] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:41:05.177 [2024-05-15 01:07:08.360972] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:41:05.177 [2024-05-15 01:07:08.447090] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:41:05.435 [2024-05-15 01:07:08.503268] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:41:05.435 [2024-05-15 01:07:08.503354] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:41:05.435 [2024-05-15 
01:07:08.503386] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:41:05.435 [2024-05-15 01:07:08.503405] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:41:05.435 [2024-05-15 01:07:08.503434] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:05.435 [2024-05-15 01:07:08.509389] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2213e30 was disconnected and freed. delete nvme_qpair. 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:05.435 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:06.395 01:07:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:06.395 01:07:09 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:06.395 01:07:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:06.395 01:07:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:06.395 01:07:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:06.395 01:07:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:06.395 01:07:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:06.395 01:07:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:06.654 01:07:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:06.654 01:07:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:07.588 01:07:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:07.588 01:07:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:07.588 01:07:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:07.588 01:07:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.588 01:07:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:07.588 01:07:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:07.588 01:07:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:07.588 01:07:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.588 01:07:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:07.588 01:07:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:08.520 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:08.520 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:08.520 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:08.520 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:08.520 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:08.520 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:08.520 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:08.520 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:08.778 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:08.779 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:09.735 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:09.735 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:09.735 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:09.735 01:07:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:09.735 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:09.735 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:09.735 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:09.735 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:09.735 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:09.735 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:10.724 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:10.724 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:10.724 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:10.724 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:10.724 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:10.724 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:10.724 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:10.724 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:10.724 [2024-05-15 01:07:13.931129] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:41:10.724 [2024-05-15 01:07:13.931204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:10.724 [2024-05-15 01:07:13.931221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:10.724 [2024-05-15 01:07:13.931234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:10.724 [2024-05-15 01:07:13.931244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:10.724 [2024-05-15 01:07:13.931254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:10.724 [2024-05-15 01:07:13.931264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:10.724 [2024-05-15 01:07:13.931274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:10.724 [2024-05-15 01:07:13.931292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:10.724 [2024-05-15 01:07:13.931303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:41:10.724 [2024-05-15 01:07:13.931312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:41:10.724 [2024-05-15 01:07:13.931321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db180 is same with the state(5) to be set 00:41:10.724 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:10.724 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:10.724 [2024-05-15 01:07:13.941122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21db180 (9): Bad file descriptor 00:41:10.724 [2024-05-15 01:07:13.951144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:11.661 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:11.661 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:11.661 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:11.661 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:11.661 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:11.661 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:11.919 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:11.919 [2024-05-15 01:07:14.979630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:41:12.853 [2024-05-15 01:07:16.002756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:41:12.853 [2024-05-15 01:07:16.002904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21db180 with addr=10.0.0.2, port=4420 00:41:12.853 [2024-05-15 01:07:16.002942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21db180 is same with the state(5) to be set 00:41:12.853 [2024-05-15 01:07:16.003930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21db180 (9): Bad file descriptor 00:41:12.853 [2024-05-15 01:07:16.004009] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:12.853 [2024-05-15 01:07:16.004070] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:41:12.853 [2024-05-15 01:07:16.004152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:12.853 [2024-05-15 01:07:16.004182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.853 [2024-05-15 01:07:16.004208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:12.853 [2024-05-15 01:07:16.004229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.853 [2024-05-15 01:07:16.004263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:12.853 [2024-05-15 01:07:16.004283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.853 [2024-05-15 01:07:16.004305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:12.853 [2024-05-15 01:07:16.004325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.853 [2024-05-15 01:07:16.004347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:41:12.853 [2024-05-15 01:07:16.004367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.853 [2024-05-15 01:07:16.004387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:41:12.853 [2024-05-15 01:07:16.004447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da610 (9): Bad file descriptor 00:41:12.853 [2024-05-15 01:07:16.005450] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:41:12.853 [2024-05-15 01:07:16.005501] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:41:12.853 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:12.853 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:12.854 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:13.787 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:13.787 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:13.787 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:13.787 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:13.787 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:13.787 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:13.787 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:13.787 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:41:14.045 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:14.981 [2024-05-15 01:07:18.012717] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:41:14.981 [2024-05-15 01:07:18.012759] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:41:14.981 [2024-05-15 01:07:18.012778] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:41:14.981 [2024-05-15 01:07:18.098860] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:41:14.981 [2024-05-15 01:07:18.154180] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:41:14.981 [2024-05-15 01:07:18.154249] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:41:14.981 [2024-05-15 01:07:18.154275] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:41:14.981 [2024-05-15 01:07:18.154293] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:41:14.981 [2024-05-15 01:07:18.154303] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:41:14.981 [2024-05-15 01:07:18.161231] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21ca780 was disconnected and freed. delete nvme_qpair. 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 108819 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 108819 ']' 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 108819 00:41:14.981 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:41:14.982 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:41:14.982 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 108819 00:41:14.982 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:41:14.982 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:41:14.982 killing process with pid 108819 00:41:14.982 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 108819' 00:41:14.982 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@966 -- # kill 108819 00:41:14.982 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 108819 00:41:15.240 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:41:15.240 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:15.240 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:41:15.240 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:15.240 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:41:15.240 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:15.240 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:15.240 rmmod nvme_tcp 00:41:15.240 rmmod nvme_fabrics 00:41:15.498 rmmod nvme_keyring 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 108770 ']' 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 108770 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 108770 ']' 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 108770 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 108770 00:41:15.498 killing process with pid 108770 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 108770' 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 108770 00:41:15.498 [2024-05-15 01:07:18.584689] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:41:15.498 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 108770 00:41:15.758 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:15.758 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:15.758 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:15.758 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:15.758 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:15.758 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:15.758 
01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:15.758 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:15.758 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:41:15.758 00:41:15.758 real 0m14.285s 00:41:15.758 user 0m24.589s 00:41:15.758 sys 0m1.619s 00:41:15.758 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:41:15.758 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.758 ************************************ 00:41:15.758 END TEST nvmf_discovery_remove_ifc 00:41:15.758 ************************************ 00:41:15.758 01:07:18 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:41:15.758 01:07:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:41:15.758 01:07:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:41:15.758 01:07:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:15.758 ************************************ 00:41:15.758 START TEST nvmf_identify_kernel_target 00:41:15.759 ************************************ 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:41:15.759 * Looking for test storage... 00:41:15.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:15.759 01:07:18 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:15.759 01:07:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:41:15.759 Cannot find device "nvmf_tgt_br" 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:41:15.759 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:41:16.018 Cannot find device "nvmf_tgt_br2" 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:41:16.018 Cannot find device "nvmf_tgt_br" 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:41:16.018 Cannot find device "nvmf_tgt_br2" 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:16.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:16.018 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:16.018 01:07:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:41:16.018 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:16.019 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:16.019 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:16.019 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:41:16.019 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:41:16.019 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:41:16.019 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:16.019 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:16.019 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:16.019 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:16.019 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:41:16.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:16.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:41:16.278 00:41:16.278 --- 10.0.0.2 ping statistics --- 00:41:16.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:16.278 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:41:16.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:16.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:41:16.278 00:41:16.278 --- 10.0.0.3 ping statistics --- 00:41:16.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:16.278 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:16.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:16.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:41:16.278 00:41:16.278 --- 10.0.0.1 ping statistics --- 00:41:16.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:16.278 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:16.278 01:07:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:16.278 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:16.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:16.537 Waiting for block devices as requested 00:41:16.537 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:41:16.796 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:41:16.796 No valid GPT data, bailing 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:41:16.796 01:07:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:41:16.796 No valid GPT data, bailing 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:41:16.796 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:41:16.796 No valid GPT data, bailing 00:41:17.054 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:41:17.054 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:41:17.054 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:41:17.054 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:41:17.054 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:17.054 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:41:17.055 No valid GPT data, bailing 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -a 10.0.0.1 -t tcp -s 4420 00:41:17.055 00:41:17.055 Discovery Log Number of Records 2, Generation counter 2 00:41:17.055 =====Discovery Log Entry 0====== 00:41:17.055 trtype: tcp 00:41:17.055 adrfam: ipv4 00:41:17.055 subtype: current discovery subsystem 00:41:17.055 treq: not specified, sq flow control disable supported 00:41:17.055 portid: 1 00:41:17.055 trsvcid: 4420 00:41:17.055 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:17.055 traddr: 10.0.0.1 00:41:17.055 eflags: none 00:41:17.055 sectype: none 00:41:17.055 =====Discovery Log Entry 1====== 00:41:17.055 trtype: tcp 00:41:17.055 adrfam: ipv4 00:41:17.055 subtype: nvme subsystem 00:41:17.055 treq: not specified, sq flow control disable supported 00:41:17.055 portid: 1 00:41:17.055 trsvcid: 4420 00:41:17.055 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:17.055 traddr: 10.0.0.1 00:41:17.055 eflags: none 00:41:17.055 sectype: none 00:41:17.055 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:41:17.055 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:41:17.314 ===================================================== 00:41:17.314 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:41:17.314 ===================================================== 00:41:17.314 Controller Capabilities/Features 00:41:17.314 ================================ 00:41:17.314 Vendor ID: 0000 00:41:17.314 Subsystem Vendor ID: 0000 00:41:17.314 Serial Number: 415b7406b5d49007c23a 00:41:17.314 Model Number: Linux 00:41:17.314 Firmware Version: 6.7.0-68 00:41:17.314 Recommended Arb Burst: 0 
00:41:17.314 IEEE OUI Identifier: 00 00 00 00:41:17.314 Multi-path I/O 00:41:17.314 May have multiple subsystem ports: No 00:41:17.314 May have multiple controllers: No 00:41:17.314 Associated with SR-IOV VF: No 00:41:17.314 Max Data Transfer Size: Unlimited 00:41:17.314 Max Number of Namespaces: 0 00:41:17.314 Max Number of I/O Queues: 1024 00:41:17.314 NVMe Specification Version (VS): 1.3 00:41:17.314 NVMe Specification Version (Identify): 1.3 00:41:17.314 Maximum Queue Entries: 1024 00:41:17.314 Contiguous Queues Required: No 00:41:17.314 Arbitration Mechanisms Supported 00:41:17.314 Weighted Round Robin: Not Supported 00:41:17.314 Vendor Specific: Not Supported 00:41:17.314 Reset Timeout: 7500 ms 00:41:17.314 Doorbell Stride: 4 bytes 00:41:17.314 NVM Subsystem Reset: Not Supported 00:41:17.314 Command Sets Supported 00:41:17.314 NVM Command Set: Supported 00:41:17.314 Boot Partition: Not Supported 00:41:17.314 Memory Page Size Minimum: 4096 bytes 00:41:17.314 Memory Page Size Maximum: 4096 bytes 00:41:17.314 Persistent Memory Region: Not Supported 00:41:17.314 Optional Asynchronous Events Supported 00:41:17.314 Namespace Attribute Notices: Not Supported 00:41:17.314 Firmware Activation Notices: Not Supported 00:41:17.314 ANA Change Notices: Not Supported 00:41:17.314 PLE Aggregate Log Change Notices: Not Supported 00:41:17.314 LBA Status Info Alert Notices: Not Supported 00:41:17.314 EGE Aggregate Log Change Notices: Not Supported 00:41:17.314 Normal NVM Subsystem Shutdown event: Not Supported 00:41:17.314 Zone Descriptor Change Notices: Not Supported 00:41:17.314 Discovery Log Change Notices: Supported 00:41:17.314 Controller Attributes 00:41:17.314 128-bit Host Identifier: Not Supported 00:41:17.314 Non-Operational Permissive Mode: Not Supported 00:41:17.314 NVM Sets: Not Supported 00:41:17.314 Read Recovery Levels: Not Supported 00:41:17.314 Endurance Groups: Not Supported 00:41:17.314 Predictable Latency Mode: Not Supported 00:41:17.314 Traffic Based Keep ALive: Not Supported 00:41:17.314 Namespace Granularity: Not Supported 00:41:17.314 SQ Associations: Not Supported 00:41:17.314 UUID List: Not Supported 00:41:17.314 Multi-Domain Subsystem: Not Supported 00:41:17.314 Fixed Capacity Management: Not Supported 00:41:17.314 Variable Capacity Management: Not Supported 00:41:17.314 Delete Endurance Group: Not Supported 00:41:17.314 Delete NVM Set: Not Supported 00:41:17.314 Extended LBA Formats Supported: Not Supported 00:41:17.314 Flexible Data Placement Supported: Not Supported 00:41:17.314 00:41:17.314 Controller Memory Buffer Support 00:41:17.314 ================================ 00:41:17.314 Supported: No 00:41:17.314 00:41:17.314 Persistent Memory Region Support 00:41:17.314 ================================ 00:41:17.314 Supported: No 00:41:17.314 00:41:17.314 Admin Command Set Attributes 00:41:17.314 ============================ 00:41:17.314 Security Send/Receive: Not Supported 00:41:17.314 Format NVM: Not Supported 00:41:17.314 Firmware Activate/Download: Not Supported 00:41:17.314 Namespace Management: Not Supported 00:41:17.314 Device Self-Test: Not Supported 00:41:17.314 Directives: Not Supported 00:41:17.314 NVMe-MI: Not Supported 00:41:17.314 Virtualization Management: Not Supported 00:41:17.314 Doorbell Buffer Config: Not Supported 00:41:17.314 Get LBA Status Capability: Not Supported 00:41:17.314 Command & Feature Lockdown Capability: Not Supported 00:41:17.314 Abort Command Limit: 1 00:41:17.314 Async Event Request Limit: 1 00:41:17.314 Number of Firmware Slots: N/A 
00:41:17.314 Firmware Slot 1 Read-Only: N/A 00:41:17.314 Firmware Activation Without Reset: N/A 00:41:17.314 Multiple Update Detection Support: N/A 00:41:17.314 Firmware Update Granularity: No Information Provided 00:41:17.314 Per-Namespace SMART Log: No 00:41:17.314 Asymmetric Namespace Access Log Page: Not Supported 00:41:17.314 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:41:17.314 Command Effects Log Page: Not Supported 00:41:17.314 Get Log Page Extended Data: Supported 00:41:17.314 Telemetry Log Pages: Not Supported 00:41:17.314 Persistent Event Log Pages: Not Supported 00:41:17.314 Supported Log Pages Log Page: May Support 00:41:17.314 Commands Supported & Effects Log Page: Not Supported 00:41:17.314 Feature Identifiers & Effects Log Page:May Support 00:41:17.314 NVMe-MI Commands & Effects Log Page: May Support 00:41:17.314 Data Area 4 for Telemetry Log: Not Supported 00:41:17.314 Error Log Page Entries Supported: 1 00:41:17.314 Keep Alive: Not Supported 00:41:17.314 00:41:17.314 NVM Command Set Attributes 00:41:17.314 ========================== 00:41:17.314 Submission Queue Entry Size 00:41:17.314 Max: 1 00:41:17.314 Min: 1 00:41:17.314 Completion Queue Entry Size 00:41:17.314 Max: 1 00:41:17.314 Min: 1 00:41:17.314 Number of Namespaces: 0 00:41:17.314 Compare Command: Not Supported 00:41:17.314 Write Uncorrectable Command: Not Supported 00:41:17.314 Dataset Management Command: Not Supported 00:41:17.314 Write Zeroes Command: Not Supported 00:41:17.314 Set Features Save Field: Not Supported 00:41:17.314 Reservations: Not Supported 00:41:17.314 Timestamp: Not Supported 00:41:17.314 Copy: Not Supported 00:41:17.314 Volatile Write Cache: Not Present 00:41:17.314 Atomic Write Unit (Normal): 1 00:41:17.314 Atomic Write Unit (PFail): 1 00:41:17.314 Atomic Compare & Write Unit: 1 00:41:17.314 Fused Compare & Write: Not Supported 00:41:17.314 Scatter-Gather List 00:41:17.314 SGL Command Set: Supported 00:41:17.314 SGL Keyed: Not Supported 00:41:17.314 SGL Bit Bucket Descriptor: Not Supported 00:41:17.314 SGL Metadata Pointer: Not Supported 00:41:17.314 Oversized SGL: Not Supported 00:41:17.314 SGL Metadata Address: Not Supported 00:41:17.314 SGL Offset: Supported 00:41:17.314 Transport SGL Data Block: Not Supported 00:41:17.314 Replay Protected Memory Block: Not Supported 00:41:17.314 00:41:17.314 Firmware Slot Information 00:41:17.314 ========================= 00:41:17.314 Active slot: 0 00:41:17.314 00:41:17.314 00:41:17.314 Error Log 00:41:17.314 ========= 00:41:17.314 00:41:17.314 Active Namespaces 00:41:17.314 ================= 00:41:17.314 Discovery Log Page 00:41:17.314 ================== 00:41:17.314 Generation Counter: 2 00:41:17.314 Number of Records: 2 00:41:17.314 Record Format: 0 00:41:17.314 00:41:17.314 Discovery Log Entry 0 00:41:17.314 ---------------------- 00:41:17.314 Transport Type: 3 (TCP) 00:41:17.314 Address Family: 1 (IPv4) 00:41:17.314 Subsystem Type: 3 (Current Discovery Subsystem) 00:41:17.314 Entry Flags: 00:41:17.314 Duplicate Returned Information: 0 00:41:17.314 Explicit Persistent Connection Support for Discovery: 0 00:41:17.314 Transport Requirements: 00:41:17.314 Secure Channel: Not Specified 00:41:17.314 Port ID: 1 (0x0001) 00:41:17.314 Controller ID: 65535 (0xffff) 00:41:17.314 Admin Max SQ Size: 32 00:41:17.315 Transport Service Identifier: 4420 00:41:17.315 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:41:17.315 Transport Address: 10.0.0.1 00:41:17.315 Discovery Log Entry 1 00:41:17.315 ---------------------- 
00:41:17.315 Transport Type: 3 (TCP) 00:41:17.315 Address Family: 1 (IPv4) 00:41:17.315 Subsystem Type: 2 (NVM Subsystem) 00:41:17.315 Entry Flags: 00:41:17.315 Duplicate Returned Information: 0 00:41:17.315 Explicit Persistent Connection Support for Discovery: 0 00:41:17.315 Transport Requirements: 00:41:17.315 Secure Channel: Not Specified 00:41:17.315 Port ID: 1 (0x0001) 00:41:17.315 Controller ID: 65535 (0xffff) 00:41:17.315 Admin Max SQ Size: 32 00:41:17.315 Transport Service Identifier: 4420 00:41:17.315 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:41:17.315 Transport Address: 10.0.0.1 00:41:17.315 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:17.315 get_feature(0x01) failed 00:41:17.315 get_feature(0x02) failed 00:41:17.315 get_feature(0x04) failed 00:41:17.315 ===================================================== 00:41:17.315 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:17.315 ===================================================== 00:41:17.315 Controller Capabilities/Features 00:41:17.315 ================================ 00:41:17.315 Vendor ID: 0000 00:41:17.315 Subsystem Vendor ID: 0000 00:41:17.315 Serial Number: 5c71da101c2929e0b27f 00:41:17.315 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:41:17.315 Firmware Version: 6.7.0-68 00:41:17.315 Recommended Arb Burst: 6 00:41:17.315 IEEE OUI Identifier: 00 00 00 00:41:17.315 Multi-path I/O 00:41:17.315 May have multiple subsystem ports: Yes 00:41:17.315 May have multiple controllers: Yes 00:41:17.315 Associated with SR-IOV VF: No 00:41:17.315 Max Data Transfer Size: Unlimited 00:41:17.315 Max Number of Namespaces: 1024 00:41:17.315 Max Number of I/O Queues: 128 00:41:17.315 NVMe Specification Version (VS): 1.3 00:41:17.315 NVMe Specification Version (Identify): 1.3 00:41:17.315 Maximum Queue Entries: 1024 00:41:17.315 Contiguous Queues Required: No 00:41:17.315 Arbitration Mechanisms Supported 00:41:17.315 Weighted Round Robin: Not Supported 00:41:17.315 Vendor Specific: Not Supported 00:41:17.315 Reset Timeout: 7500 ms 00:41:17.315 Doorbell Stride: 4 bytes 00:41:17.315 NVM Subsystem Reset: Not Supported 00:41:17.315 Command Sets Supported 00:41:17.315 NVM Command Set: Supported 00:41:17.315 Boot Partition: Not Supported 00:41:17.315 Memory Page Size Minimum: 4096 bytes 00:41:17.315 Memory Page Size Maximum: 4096 bytes 00:41:17.315 Persistent Memory Region: Not Supported 00:41:17.315 Optional Asynchronous Events Supported 00:41:17.315 Namespace Attribute Notices: Supported 00:41:17.315 Firmware Activation Notices: Not Supported 00:41:17.315 ANA Change Notices: Supported 00:41:17.315 PLE Aggregate Log Change Notices: Not Supported 00:41:17.315 LBA Status Info Alert Notices: Not Supported 00:41:17.315 EGE Aggregate Log Change Notices: Not Supported 00:41:17.315 Normal NVM Subsystem Shutdown event: Not Supported 00:41:17.315 Zone Descriptor Change Notices: Not Supported 00:41:17.315 Discovery Log Change Notices: Not Supported 00:41:17.315 Controller Attributes 00:41:17.315 128-bit Host Identifier: Supported 00:41:17.315 Non-Operational Permissive Mode: Not Supported 00:41:17.315 NVM Sets: Not Supported 00:41:17.315 Read Recovery Levels: Not Supported 00:41:17.315 Endurance Groups: Not Supported 00:41:17.315 Predictable Latency Mode: Not Supported 00:41:17.315 Traffic Based Keep ALive: 
Supported 00:41:17.315 Namespace Granularity: Not Supported 00:41:17.315 SQ Associations: Not Supported 00:41:17.315 UUID List: Not Supported 00:41:17.315 Multi-Domain Subsystem: Not Supported 00:41:17.315 Fixed Capacity Management: Not Supported 00:41:17.315 Variable Capacity Management: Not Supported 00:41:17.315 Delete Endurance Group: Not Supported 00:41:17.315 Delete NVM Set: Not Supported 00:41:17.315 Extended LBA Formats Supported: Not Supported 00:41:17.315 Flexible Data Placement Supported: Not Supported 00:41:17.315 00:41:17.315 Controller Memory Buffer Support 00:41:17.315 ================================ 00:41:17.315 Supported: No 00:41:17.315 00:41:17.315 Persistent Memory Region Support 00:41:17.315 ================================ 00:41:17.315 Supported: No 00:41:17.315 00:41:17.315 Admin Command Set Attributes 00:41:17.315 ============================ 00:41:17.315 Security Send/Receive: Not Supported 00:41:17.315 Format NVM: Not Supported 00:41:17.315 Firmware Activate/Download: Not Supported 00:41:17.315 Namespace Management: Not Supported 00:41:17.315 Device Self-Test: Not Supported 00:41:17.315 Directives: Not Supported 00:41:17.315 NVMe-MI: Not Supported 00:41:17.315 Virtualization Management: Not Supported 00:41:17.315 Doorbell Buffer Config: Not Supported 00:41:17.315 Get LBA Status Capability: Not Supported 00:41:17.315 Command & Feature Lockdown Capability: Not Supported 00:41:17.315 Abort Command Limit: 4 00:41:17.315 Async Event Request Limit: 4 00:41:17.315 Number of Firmware Slots: N/A 00:41:17.315 Firmware Slot 1 Read-Only: N/A 00:41:17.315 Firmware Activation Without Reset: N/A 00:41:17.315 Multiple Update Detection Support: N/A 00:41:17.315 Firmware Update Granularity: No Information Provided 00:41:17.315 Per-Namespace SMART Log: Yes 00:41:17.315 Asymmetric Namespace Access Log Page: Supported 00:41:17.315 ANA Transition Time : 10 sec 00:41:17.315 00:41:17.315 Asymmetric Namespace Access Capabilities 00:41:17.315 ANA Optimized State : Supported 00:41:17.315 ANA Non-Optimized State : Supported 00:41:17.315 ANA Inaccessible State : Supported 00:41:17.315 ANA Persistent Loss State : Supported 00:41:17.315 ANA Change State : Supported 00:41:17.315 ANAGRPID is not changed : No 00:41:17.315 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:41:17.315 00:41:17.315 ANA Group Identifier Maximum : 128 00:41:17.315 Number of ANA Group Identifiers : 128 00:41:17.315 Max Number of Allowed Namespaces : 1024 00:41:17.315 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:41:17.315 Command Effects Log Page: Supported 00:41:17.315 Get Log Page Extended Data: Supported 00:41:17.315 Telemetry Log Pages: Not Supported 00:41:17.315 Persistent Event Log Pages: Not Supported 00:41:17.315 Supported Log Pages Log Page: May Support 00:41:17.315 Commands Supported & Effects Log Page: Not Supported 00:41:17.315 Feature Identifiers & Effects Log Page:May Support 00:41:17.315 NVMe-MI Commands & Effects Log Page: May Support 00:41:17.315 Data Area 4 for Telemetry Log: Not Supported 00:41:17.315 Error Log Page Entries Supported: 128 00:41:17.315 Keep Alive: Supported 00:41:17.315 Keep Alive Granularity: 1000 ms 00:41:17.315 00:41:17.315 NVM Command Set Attributes 00:41:17.315 ========================== 00:41:17.315 Submission Queue Entry Size 00:41:17.315 Max: 64 00:41:17.315 Min: 64 00:41:17.315 Completion Queue Entry Size 00:41:17.315 Max: 16 00:41:17.315 Min: 16 00:41:17.315 Number of Namespaces: 1024 00:41:17.315 Compare Command: Not Supported 00:41:17.315 Write Uncorrectable Command: Not 
Supported 00:41:17.315 Dataset Management Command: Supported 00:41:17.315 Write Zeroes Command: Supported 00:41:17.315 Set Features Save Field: Not Supported 00:41:17.315 Reservations: Not Supported 00:41:17.315 Timestamp: Not Supported 00:41:17.315 Copy: Not Supported 00:41:17.315 Volatile Write Cache: Present 00:41:17.315 Atomic Write Unit (Normal): 1 00:41:17.315 Atomic Write Unit (PFail): 1 00:41:17.315 Atomic Compare & Write Unit: 1 00:41:17.315 Fused Compare & Write: Not Supported 00:41:17.315 Scatter-Gather List 00:41:17.315 SGL Command Set: Supported 00:41:17.315 SGL Keyed: Not Supported 00:41:17.315 SGL Bit Bucket Descriptor: Not Supported 00:41:17.315 SGL Metadata Pointer: Not Supported 00:41:17.315 Oversized SGL: Not Supported 00:41:17.315 SGL Metadata Address: Not Supported 00:41:17.315 SGL Offset: Supported 00:41:17.315 Transport SGL Data Block: Not Supported 00:41:17.315 Replay Protected Memory Block: Not Supported 00:41:17.315 00:41:17.315 Firmware Slot Information 00:41:17.315 ========================= 00:41:17.315 Active slot: 0 00:41:17.315 00:41:17.315 Asymmetric Namespace Access 00:41:17.316 =========================== 00:41:17.316 Change Count : 0 00:41:17.316 Number of ANA Group Descriptors : 1 00:41:17.316 ANA Group Descriptor : 0 00:41:17.316 ANA Group ID : 1 00:41:17.316 Number of NSID Values : 1 00:41:17.316 Change Count : 0 00:41:17.316 ANA State : 1 00:41:17.316 Namespace Identifier : 1 00:41:17.316 00:41:17.316 Commands Supported and Effects 00:41:17.316 ============================== 00:41:17.316 Admin Commands 00:41:17.316 -------------- 00:41:17.316 Get Log Page (02h): Supported 00:41:17.316 Identify (06h): Supported 00:41:17.316 Abort (08h): Supported 00:41:17.316 Set Features (09h): Supported 00:41:17.316 Get Features (0Ah): Supported 00:41:17.316 Asynchronous Event Request (0Ch): Supported 00:41:17.316 Keep Alive (18h): Supported 00:41:17.316 I/O Commands 00:41:17.316 ------------ 00:41:17.316 Flush (00h): Supported 00:41:17.316 Write (01h): Supported LBA-Change 00:41:17.316 Read (02h): Supported 00:41:17.316 Write Zeroes (08h): Supported LBA-Change 00:41:17.316 Dataset Management (09h): Supported 00:41:17.316 00:41:17.316 Error Log 00:41:17.316 ========= 00:41:17.316 Entry: 0 00:41:17.316 Error Count: 0x3 00:41:17.316 Submission Queue Id: 0x0 00:41:17.316 Command Id: 0x5 00:41:17.316 Phase Bit: 0 00:41:17.316 Status Code: 0x2 00:41:17.316 Status Code Type: 0x0 00:41:17.316 Do Not Retry: 1 00:41:17.316 Error Location: 0x28 00:41:17.316 LBA: 0x0 00:41:17.316 Namespace: 0x0 00:41:17.316 Vendor Log Page: 0x0 00:41:17.316 ----------- 00:41:17.316 Entry: 1 00:41:17.316 Error Count: 0x2 00:41:17.316 Submission Queue Id: 0x0 00:41:17.316 Command Id: 0x5 00:41:17.316 Phase Bit: 0 00:41:17.316 Status Code: 0x2 00:41:17.316 Status Code Type: 0x0 00:41:17.316 Do Not Retry: 1 00:41:17.316 Error Location: 0x28 00:41:17.316 LBA: 0x0 00:41:17.316 Namespace: 0x0 00:41:17.316 Vendor Log Page: 0x0 00:41:17.316 ----------- 00:41:17.316 Entry: 2 00:41:17.316 Error Count: 0x1 00:41:17.316 Submission Queue Id: 0x0 00:41:17.316 Command Id: 0x4 00:41:17.316 Phase Bit: 0 00:41:17.316 Status Code: 0x2 00:41:17.316 Status Code Type: 0x0 00:41:17.316 Do Not Retry: 1 00:41:17.316 Error Location: 0x28 00:41:17.316 LBA: 0x0 00:41:17.316 Namespace: 0x0 00:41:17.316 Vendor Log Page: 0x0 00:41:17.316 00:41:17.316 Number of Queues 00:41:17.316 ================ 00:41:17.316 Number of I/O Submission Queues: 128 00:41:17.316 Number of I/O Completion Queues: 128 00:41:17.316 00:41:17.316 ZNS 
Specific Controller Data 00:41:17.316 ============================ 00:41:17.316 Zone Append Size Limit: 0 00:41:17.316 00:41:17.316 00:41:17.316 Active Namespaces 00:41:17.316 ================= 00:41:17.316 get_feature(0x05) failed 00:41:17.316 Namespace ID:1 00:41:17.316 Command Set Identifier: NVM (00h) 00:41:17.316 Deallocate: Supported 00:41:17.316 Deallocated/Unwritten Error: Not Supported 00:41:17.316 Deallocated Read Value: Unknown 00:41:17.316 Deallocate in Write Zeroes: Not Supported 00:41:17.316 Deallocated Guard Field: 0xFFFF 00:41:17.316 Flush: Supported 00:41:17.316 Reservation: Not Supported 00:41:17.316 Namespace Sharing Capabilities: Multiple Controllers 00:41:17.316 Size (in LBAs): 1310720 (5GiB) 00:41:17.316 Capacity (in LBAs): 1310720 (5GiB) 00:41:17.316 Utilization (in LBAs): 1310720 (5GiB) 00:41:17.316 UUID: a4a647c0-7bd1-44a8-83a3-fa402b18da25 00:41:17.316 Thin Provisioning: Not Supported 00:41:17.316 Per-NS Atomic Units: Yes 00:41:17.316 Atomic Boundary Size (Normal): 0 00:41:17.316 Atomic Boundary Size (PFail): 0 00:41:17.316 Atomic Boundary Offset: 0 00:41:17.316 NGUID/EUI64 Never Reused: No 00:41:17.316 ANA group ID: 1 00:41:17.316 Namespace Write Protected: No 00:41:17.316 Number of LBA Formats: 1 00:41:17.316 Current LBA Format: LBA Format #00 00:41:17.316 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:41:17.316 00:41:17.316 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:41:17.316 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:17.316 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:41:17.575 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:17.575 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:41:17.575 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:17.575 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:17.575 rmmod nvme_tcp 00:41:17.575 rmmod nvme_fabrics 00:41:17.575 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:17.575 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:41:17.575 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:41:17.575 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:41:17.576 01:07:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:18.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:18.401 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:41:18.401 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:41:18.401 ************************************ 00:41:18.401 END TEST nvmf_identify_kernel_target 00:41:18.401 ************************************ 00:41:18.401 00:41:18.401 real 0m2.708s 00:41:18.401 user 0m0.909s 00:41:18.401 sys 0m1.305s 00:41:18.401 01:07:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:41:18.401 01:07:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:41:18.401 01:07:21 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:41:18.401 01:07:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:41:18.401 01:07:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:41:18.401 01:07:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:18.401 ************************************ 00:41:18.401 START TEST nvmf_auth_host 00:41:18.401 ************************************ 00:41:18.401 01:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:41:18.659 * Looking for test storage... 
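The clean_kernel_target steps logged just above tear the kernel nvmet target down through configfs: quiesce the namespace, unlink the subsystem from the port, remove the configfs directories innermost-first, then unload the modules. Condensed into a standalone sketch (the namespaces/1/enable attribute path is inferred from the standard nvmet configfs layout; the log itself only shows "echo 0"):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    echo 0 > "$subsys/namespaces/1/enable"                  # quiesce the namespace (attribute path assumed)
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"    # detach the subsystem from the port
    rmdir "$subsys/namespaces/1" "$port" "$subsys"          # remove innermost directories first
    modprobe -r nvmet_tcp nvmet                             # unload the transport, then the core module

The ordering matters: configfs refuses to remove a directory that still has children or an active port link, which is why the rmdir calls in the log run namespace, then port, then subsystem.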
00:41:18.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:18.659 01:07:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:41:18.660 Cannot find device "nvmf_tgt_br" 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:41:18.660 Cannot find device "nvmf_tgt_br2" 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:41:18.660 Cannot find device "nvmf_tgt_br" 
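The nvmf_veth_init steps that follow build a small two-namespace test network: a veth pair for the initiator on the host, veth pairs for the target inside the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, and TCP/4420 opened in iptables. A condensed sketch using the same interface names and addresses as the logged commands (only the first target interface is shown; the run also creates nvmf_tgt_if2 at 10.0.0.3 the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, both ends stay on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator-to-target reachability check

The "Cannot find device" messages above are the expected result of the cleanup pass that deletes any leftover interfaces before this topology is created.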
00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:41:18.660 Cannot find device "nvmf_tgt_br2" 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:18.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:18.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:18.660 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:18.919 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:18.919 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:18.919 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:41:18.919 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:41:18.919 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:41:18.919 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:41:18.919 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:18.919 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:18.919 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:18.919 01:07:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:41:18.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:18.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:41:18.919 00:41:18.919 --- 10.0.0.2 ping statistics --- 00:41:18.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.919 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:41:18.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:18.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:41:18.919 00:41:18.919 --- 10.0.0.3 ping statistics --- 00:41:18.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.919 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:18.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:18.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:41:18.919 00:41:18.919 --- 10.0.0.1 ping statistics --- 00:41:18.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.919 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.919 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=109698 00:41:18.920 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:41:18.920 01:07:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 109698 00:41:18.920 01:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 109698 ']' 00:41:18.920 01:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.920 01:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:41:18.920 01:07:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.920 01:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:18.920 01:07:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e3c79834912dc7a26a3683a93daa3855 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.0kT 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e3c79834912dc7a26a3683a93daa3855 0 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e3c79834912dc7a26a3683a93daa3855 0 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e3c79834912dc7a26a3683a93daa3855 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:41:19.853 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.0kT 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.0kT 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.0kT 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=40302399d3f39cd29b81fe7ca73fc9e7a6cb1420dfd099d58a55f01c014f110d 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zMB 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 40302399d3f39cd29b81fe7ca73fc9e7a6cb1420dfd099d58a55f01c014f110d 3 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 40302399d3f39cd29b81fe7ca73fc9e7a6cb1420dfd099d58a55f01c014f110d 3 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=40302399d3f39cd29b81fe7ca73fc9e7a6cb1420dfd099d58a55f01c014f110d 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zMB 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zMB 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.zMB 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4df9eaf61bb8f0d25bc9f3ce19f24475d3db078d1935968a 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.S92 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4df9eaf61bb8f0d25bc9f3ce19f24475d3db078d1935968a 0 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4df9eaf61bb8f0d25bc9f3ce19f24475d3db078d1935968a 0 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4df9eaf61bb8f0d25bc9f3ce19f24475d3db078d1935968a 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.S92 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.S92 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.S92 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:20.111 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cf60aa97b0f7866593dc6792ee87e1f8c9d35cac03493931 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HaT 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cf60aa97b0f7866593dc6792ee87e1f8c9d35cac03493931 2 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cf60aa97b0f7866593dc6792ee87e1f8c9d35cac03493931 2 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cf60aa97b0f7866593dc6792ee87e1f8c9d35cac03493931 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HaT 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HaT 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.HaT 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0971f2d5a93dfa2f147aea6906fd0f27 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.F44 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0971f2d5a93dfa2f147aea6906fd0f27 
1 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0971f2d5a93dfa2f147aea6906fd0f27 1 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0971f2d5a93dfa2f147aea6906fd0f27 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:41:20.112 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.F44 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.F44 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.F44 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d05fe1197967f01201aaa7ff8b350c34 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fAb 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d05fe1197967f01201aaa7ff8b350c34 1 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d05fe1197967f01201aaa7ff8b350c34 1 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d05fe1197967f01201aaa7ff8b350c34 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fAb 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fAb 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.fAb 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:41:20.371 01:07:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8ee0cf3f0db54111c5cf2a2b4c9f13905f9c405b49809400 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Jt0 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8ee0cf3f0db54111c5cf2a2b4c9f13905f9c405b49809400 2 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8ee0cf3f0db54111c5cf2a2b4c9f13905f9c405b49809400 2 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8ee0cf3f0db54111c5cf2a2b4c9f13905f9c405b49809400 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Jt0 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Jt0 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Jt0 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=66259c2e066d1db7377e4aa81a0762af 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MA9 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 66259c2e066d1db7377e4aa81a0762af 0 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 66259c2e066d1db7377e4aa81a0762af 0 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=66259c2e066d1db7377e4aa81a0762af 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MA9 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MA9 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.MA9 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d4dab7c626b880f395131159bc90a32e368500d0cc417bb876b606ab88b0fb81 00:41:20.371 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NDk 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d4dab7c626b880f395131159bc90a32e368500d0cc417bb876b606ab88b0fb81 3 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d4dab7c626b880f395131159bc90a32e368500d0cc417bb876b606ab88b0fb81 3 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d4dab7c626b880f395131159bc90a32e368500d0cc417bb876b606ab88b0fb81 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NDk 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NDk 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.NDk 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 109698 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 109698 ']' 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:20.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
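Each gen_dhchap_key call above follows the same pattern: draw random bytes with xxd, wrap them as a DHHC-1 secret for the requested digest, and store the result in a mode-0600 temp file whose path is handed back to the caller. A minimal sketch of that flow for the "null 32" case (the final DHHC-1 wrapping is done by an inline python helper in nvmf/common.sh that the log does not show, so the raw hex written here is a stand-in rather than the real on-disk format):

    key_hex=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes -> 32 hex chars; -l 24 / -l 32 for lengths 48/64
    key_file=$(mktemp -t spdk.key-null.XXX)    # e.g. /tmp/spdk.key-null.0kT in the run above
    printf '%s\n' "$key_hex" > "$key_file"     # stand-in: the test actually writes a DHHC-1:<digest>:...: string
    chmod 0600 "$key_file"                     # secrets must not be group/world readable
    echo "$key_file"

The chmod 0600 step mirrors the logged behaviour and is what lets the files be used later as DH-HMAC-CHAP secrets without permission warnings.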
00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:20.629 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0kT 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.zMB ]] 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zMB 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.S92 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.HaT ]] 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HaT 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.888 01:07:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.888 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.888 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:20.888 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.F44 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.fAb ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fAb 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Jt0 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.MA9 ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.MA9 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.NDk 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
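configure_kernel_target, whose paths are set up at the end of the trace above, drives the in-kernel NVMe-oF target entirely through configfs; the mkdir/echo/ln -s calls that follow in the log (once the block-device scan settles on /dev/nvme1n1) amount to roughly the sketch below. bash xtrace does not print redirection targets, so the attribute file names here follow the standard Linux nvmet configfs layout and should be read as an assumption, not a transcript:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe nvmet
  mkdir "$subsys"                          # namespaces/ and allowed_hosts/ appear automatically
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string (attribute name assumed)
  echo 1            > "$subsys/attr_allow_any_host"             # tightened later via allowed_hosts/
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"        # backing block device found below
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                  # expose the subsystem on the port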
00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:20.889 01:07:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:21.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:21.147 Waiting for block devices as requested 00:41:21.406 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:41:21.406 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:41:21.972 No valid GPT data, bailing 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:41:21.972 No valid GPT data, bailing 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:41:21.972 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:41:21.973 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:41:21.973 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:41:21.973 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:41:21.973 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:41:21.973 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:41:21.973 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:41:22.232 No valid GPT data, bailing 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:41:22.232 No valid GPT data, bailing 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:41:22.232 01:07:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -a 10.0.0.1 -t tcp -s 4420 00:41:22.232 00:41:22.232 Discovery Log Number of Records 2, Generation counter 2 00:41:22.232 =====Discovery Log Entry 0====== 00:41:22.232 trtype: tcp 00:41:22.232 adrfam: ipv4 00:41:22.232 subtype: current discovery subsystem 00:41:22.232 treq: not specified, sq flow control disable supported 00:41:22.232 portid: 1 00:41:22.232 trsvcid: 4420 00:41:22.232 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:22.232 traddr: 10.0.0.1 00:41:22.232 eflags: none 00:41:22.232 sectype: none 00:41:22.232 =====Discovery Log Entry 1====== 00:41:22.232 trtype: tcp 00:41:22.232 adrfam: ipv4 00:41:22.232 subtype: nvme subsystem 00:41:22.232 treq: not specified, sq flow control disable supported 00:41:22.232 portid: 1 00:41:22.232 trsvcid: 4420 00:41:22.232 subnqn: nqn.2024-02.io.spdk:cnode0 00:41:22.232 traddr: 10.0.0.1 00:41:22.232 eflags: none 00:41:22.232 sectype: none 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:22.232 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:22.233 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:22.233 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:22.233 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:22.491 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.492 nvme0n1 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.492 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.762 nvme0n1 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:22.762 01:07:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:22.763 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.763 01:07:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.763 nvme0n1 00:41:22.763 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.763 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:22.763 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:22.763 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.763 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.051 01:07:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:23.051 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.052 nvme0n1 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:41:23.052 01:07:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.052 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.311 nvme0n1 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:23.311 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:23.312 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:23.312 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:41:23.312 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.312 nvme0n1 00:41:23.312 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.312 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:23.312 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:23.312 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.312 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.312 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:23.571 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.830 01:07:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.830 nvme0n1 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.830 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.089 nvme0n1 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.089 01:07:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:24.089 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:24.090 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:24.090 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:24.090 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:24.090 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:24.090 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:24.090 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.090 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.349 nvme0n1 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.349 nvme0n1 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.349 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:24.608 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
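The get_main_ns_ip expansion traced just above (nvmf/common.sh@741-755) resolves the address the host connects to: it maps the transport name to the name of an environment variable, dereferences that name, and only then echoes the resulting address (10.0.0.1 in this run). A minimal sketch of that helper follows, assuming the transport is carried in TEST_TRANSPORT and that the helper returns non-zero when nothing is set; neither detail is visible in the xtrace, and the authoritative version lives in test/nvmf/common.sh.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # variable *names*, not addresses
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # Bail out if the transport is unset or unknown (assumed behaviour).
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # e.g. NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1             # indirect expansion yields the address itself
        echo "${!ip}"                           # 10.0.0.1 in this log
    }

The indirection is what the [[ -z NVMF_INITIATOR_IP ]] / [[ -z 10.0.0.1 ]] pair in the trace reflects: the first test checks the variable name, the second its dereferenced value.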
00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.609 nvme0n1 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:24.609 01:07:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
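Each iteration traced in this block (host/auth.sh@102-104) follows the same shape: install the DHHC-1 secret for the current keyid on the kernel nvmet target, restrict the SPDK host to exactly one digest and DH group, attach with the matching keyring entries, check that the controller shows up as nvme0, then detach before the next combination. A condensed sketch of one iteration is below, using the keyid=2 secrets copied from the trace. The configfs destinations of the four echo statements are not shown by the xtrace and are assumptions here, as are the scripts/rpc.py invocation (rpc_cmd wraps the RPC client) and the key${keyid}/ckey${keyid} names, which are presumed to be keyring entries registered earlier in the test from the same secrets.

    digest=sha256 dhgroup=ffdhe3072 keyid=2
    hostnqn=nqn.2024-02.io.spdk:host0
    key='DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF:'    # from the trace
    ckey='DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44:'   # from the trace

    # Target side (nvmet_auth_set_key): hash, DH group, host secret and, when a
    # controller secret exists for this keyid, the bidirectional secret as well.
    # The configfs attribute paths are assumptions, not shown in the xtrace.
    echo "hmac(${digest})" > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_hash"
    echo "${dhgroup}"      > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_dhgroup"
    echo "${key}"          > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_key"
    [[ -n $ckey ]] &&
        echo "${ckey}"     > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_ctrl_key"

    # Host side (connect_authenticate): allow one digest/DH group, authenticate on attach.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0

For keyids whose controller secret is empty (keyid=4 in this trace), the attach is issued without --dhchap-ctrlr-key, which is what the ckey=(${ckeys[keyid]:+...}) expansion at host/auth.sh@58 arranges.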
00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.176 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.435 nvme0n1 00:41:25.435 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.435 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.435 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.435 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.435 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.435 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.435 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.435 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.435 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.435 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.694 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.694 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:25.694 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.695 nvme0n1 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.695 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.954 01:07:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.954 nvme0n1 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.954 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:26.213 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.214 nvme0n1 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.214 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:26.474 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:26.475 01:07:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.475 nvme0n1 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.475 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.734 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:26.734 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:26.734 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.734 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.734 01:07:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.734 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:26.735 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.735 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:41:26.735 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.735 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:26.735 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:26.735 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:26.735 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:26.735 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:26.735 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:26.735 01:07:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.637 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.896 nvme0n1 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:28.896 01:07:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:28.896 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:28.896 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:28.896 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.155 nvme0n1 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:29.155 
01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:29.155 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:29.156 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:29.156 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:29.156 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:29.156 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.156 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.722 nvme0n1 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.722 01:07:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.980 nvme0n1 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:29.980 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:29.981 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.981 01:07:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.239 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.497 nvme0n1 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.497 01:07:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.063 nvme0n1 00:41:31.063 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.063 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:31.063 01:07:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:31.063 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.063 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.063 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.063 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:31.063 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:31.063 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.063 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.322 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.900 nvme0n1 00:41:31.900 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.900 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:31.900 01:07:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:31.900 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.900 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.900 01:07:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.900 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.469 nvme0n1 00:41:32.469 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:32.469 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:32.469 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:32.469 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:32.470 
01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
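The get_main_ns_ip calls traced in these entries resolve the initiator address (10.0.0.1) used for every attach in this test. A minimal bash sketch of that selection logic, reconstructed from the xtrace output; the transport variable name is an assumption, since only its expanded value (tcp) appears in the log:

# Sketch of nvmf/common.sh:get_main_ns_ip as traced above (a reconstruction, not the
# verbatim source). $TEST_TRANSPORT is an assumed name; the trace only shows "tcp".
get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	# Pick the candidate variable name for the active transport ...
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	# ... and dereference it: NVMF_INITIATOR_IP resolves to 10.0.0.1 in this run.
	ip=${!ip}
	[[ -z $ip ]] && return 1
	echo "$ip"
}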
00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:32.470 01:07:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.405 nvme0n1 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:33.405 
01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:33.405 01:07:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.406 01:07:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.973 nvme0n1 00:41:33.973 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.973 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.974 nvme0n1 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.974 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.233 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:34.233 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:34.233 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.233 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.233 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.233 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
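Each keyid above goes through the same connect_authenticate sequence: restrict the host to one digest/DH-group, attach with the DH-HMAC-CHAP key pair, confirm the controller came up, and detach. A condensed bash sketch of that sequence as it appears in the trace; the function body is a reconstruction of host/auth.sh, not its verbatim source, while rpc_cmd, the flags and the NQNs are taken from the log:

# Reconstructed from the host/auth.sh xtrace above; assumes rpc_cmd, get_main_ns_ip
# and the keys[]/ckeys[] arrays from the surrounding test environment.
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	local ckey
	# Only pass a controller key when one is configured for this keyid (see @58 above).
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# Limit the host to the digest/DH-group under test, then connect with DH-HMAC-CHAP.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# The attach only succeeds if authentication passed; verify, then clean up.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}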
00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.234 nvme0n1 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.234 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.493 nvme0n1 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.493 nvme0n1 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:34.493 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.494 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.494 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:34.494 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.752 nvme0n1 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:34.752 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.753 01:07:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
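On the target side, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) programs the kernel nvmet host entry with the hash, DH group and DHHC-1 key(s) before each connect. A sketch of that helper; the echo destinations are an assumption (the per-host configfs attributes used by in-kernel DH-HMAC-CHAP), since the xtrace lines above show only the echoed values:

# Sketch reconstructed from the trace; keys[]/ckeys[] and $hostnqn come from the
# surrounding test setup, and the configfs paths are assumed, not shown in the log.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host=/sys/kernel/config/nvmet/hosts/$hostnqn

	echo "hmac($digest)" > "$host/dhchap_hash"
	echo "$dhgroup" > "$host/dhchap_dhgroup"
	echo "$key" > "$host/dhchap_key"
	# A controller (bidirectional) key is optional; keyid 4 has none in this run.
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}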
00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:34.753 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.012 nvme0n1 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
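The test repeats that pattern for every digest/DH-group/key combination; the host/auth.sh@100-@103 markers above are the loop headers driving it. A sketch of that sweep, with the array contents limited to the values visible in this part of the log (the full matrix in host/auth.sh may be larger):

# Loop structure taken from the host/auth.sh@100-103 trace lines above; the arrays
# below only list the digests and DH groups seen in this excerpt of the log.
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			# Program the kernel target, then authenticate from the SPDK host side.
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
			connect_authenticate "$digest" "$dhgroup" "$keyid"
		done
	done
done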
00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.012 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.271 nvme0n1 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:35.271 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.272 nvme0n1 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.272 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.531 nvme0n1 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:35.531 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.532 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.790 nvme0n1 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.790 01:07:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:35.790 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:35.791 01:07:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.050 nvme0n1 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.050 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.310 nvme0n1 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.310 01:07:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.310 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.569 nvme0n1 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:41:36.569 01:07:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.569 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.829 nvme0n1 00:41:36.829 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.829 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:36.829 01:07:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:36.829 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.829 01:07:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:41:36.829 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.088 nvme0n1 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.088 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.655 nvme0n1 00:41:37.655 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.655 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:37.655 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.655 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.655 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:37.655 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.655 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.656 01:07:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.914 nvme0n1 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.914 01:07:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:37.914 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:37.915 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:37.915 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:37.915 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:37.915 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:37.915 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:37.915 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:37.915 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:37.915 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:37.915 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.915 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.482 nvme0n1 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:38.482 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.741 nvme0n1 00:41:38.741 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:38.741 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:38.741 01:07:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:38.741 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:38.741 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.741 01:07:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:39.000 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.259 nvme0n1 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:39.259 01:07:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.827 nvme0n1 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:40.086 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.651 nvme0n1 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:40.651 01:07:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:40.652 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:40.652 01:07:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.219 nvme0n1 00:41:41.219 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:41.219 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:41.219 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:41.219 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.219 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:41.219 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:41.478 01:07:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.044 nvme0n1 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:42.044 01:07:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.044 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.609 nvme0n1 00:41:42.609 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.609 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:42.609 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.609 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:42.609 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.609 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.867 01:07:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.867 nvme0n1 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.867 01:07:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:42.867 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.868 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.126 nvme0n1 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.126 nvme0n1 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.126 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.385 01:07:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:43.385 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:43.386 01:07:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.386 nvme0n1 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.386 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.645 nvme0n1 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.645 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.904 nvme0n1 00:41:43.904 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.904 
01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.904 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.904 01:07:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.904 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.904 01:07:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.904 01:07:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.904 nvme0n1 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:43.904 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.905 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.905 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:43.905 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.905 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.163 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:44.163 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:44.163 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.163 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
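The block of RPC calls that keeps repeating in this trace is the host side of a single authentication pass. The sketch below distils that sequence for one digest/dhgroup/key combination; it is an illustration reconstructed from the log, not the real host/auth.sh. The rpc_cmd wrapper and the SPDK_DIR variable are assumptions (the harness defines its own rpc_cmd), and the secrets named key0..key4 / ckey0..ckey3 are registered with the target earlier in the run, outside this excerpt.

#!/usr/bin/env bash
# Sketch only: one host-side DH-HMAC-CHAP pass, as seen in the surrounding trace.
set -euo pipefail

# Assumption: the real harness provides rpc_cmd; here it is a plain wrapper around
# SPDK's RPC client, with the repository root taken from $SPDK_DIR.
rpc_cmd() { "${SPDK_DIR:?point at the SPDK repo}/scripts/rpc.py" "$@"; }

digest=sha512
dhgroup=ffdhe3072
keyid=0    # key0..key4 and ckey0..ckey3 are assumed to be registered already

# Restrict the initiator to the digest/DH-group pair under test (auth.sh@60 in the trace).
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach to the target; DH-HMAC-CHAP runs as part of the fabrics connect (auth.sh@61).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the controller came up, then detach before the next combination (auth.sh@64-65).
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

Note that keyid 4 has no controller key in this run (its ckey is empty in the trace), so its attach is issued without --dhchap-ctrlr-key.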
00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.164 nvme0n1 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.164 01:07:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.164 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.423 nvme0n1 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:44.423 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:44.424 
01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.424 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.682 nvme0n1 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.682 01:07:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.941 nvme0n1 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:44.941 01:07:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:44.941 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.200 nvme0n1 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
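Each nvmet_auth_set_key call traced here (auth.sh@42-51) only surfaces a few echo lines, the digest as 'hmac(sha512)', the DH group name, and the DHHC-1 secrets, because xtrace does not record where that output is redirected. A plausible reading, offered purely as an assumption rather than something visible in the log, is that these values land in the Linux nvmet configfs entry for the host, roughly as below; the configfs paths are the assumed part, while the values are copied verbatim from the keyid 2 pass that continues after this aside.

# Assumed mapping of the echoed values onto kernel nvmet configfs (needs root and
# an existing hosts/<hostnqn> entry; none of these paths appear in the xtrace).
hostnqn=nqn.2024-02.io.spdk:host0
host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn

echo 'hmac(sha512)' > "$host_cfs/dhchap_hash"      # digest, echoed at auth.sh@48
echo ffdhe4096      > "$host_cfs/dhchap_dhgroup"   # DH group, echoed at auth.sh@49
echo 'DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF:' \
    > "$host_cfs/dhchap_key"                       # host secret, echoed at auth.sh@50
echo 'DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44:' \
    > "$host_cfs/dhchap_ctrl_key"                  # controller secret, echoed at auth.sh@51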
00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.200 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.464 nvme0n1 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:45.464 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.465 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.724 nvme0n1 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.724 01:07:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.984 nvme0n1 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.984 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.553 nvme0n1 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
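Stepping back from the individual passes, the shape of this whole stretch of log is two nested loops: the trace shows for dhgroup in "${dhgroups[@]}" (auth.sh@101) and for keyid in "${!keys[@]}" (auth.sh@102), with the digest fixed at sha512 and ffdhe3072, ffdhe4096 and now ffdhe6144 appearing as successive groups. A compressed sketch of that control flow follows; it assumes it runs inside the harness where the traced helpers nvmet_auth_set_key and connect_authenticate are defined, and the group list is limited to what is visible in this excerpt (the full test may cover additional groups).

# Control-flow sketch of the traced loops; assumes host/auth.sh's helpers are in scope.
digest=sha512
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)    # groups seen in this excerpt only

for dhgroup in "${dhgroups[@]}"; do         # auth.sh@101
    for keyid in 0 1 2 3 4; do              # auth.sh@102 iterates over ${!keys[@]}
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify nvme0, detach
    done
done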
00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:46.553 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.812 nvme0n1 00:41:46.812 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:46.812 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:46.812 01:07:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:46.812 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:46.812 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.812 01:07:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:46.812 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.379 nvme0n1 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:47.379 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.638 nvme0n1 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:47.638 01:07:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.204 nvme0n1 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:48.204 01:07:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNjNzk4MzQ5MTJkYzdhMjZhMzY4M2E5M2RhYTM4NTVj69PB: 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: ]] 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAzMDIzOTlkM2YzOWNkMjliODFmZTdjYTczZmM5ZTdhNmNiMTQyMGRmZDA5OWQ1OGE1NWYwMWMwMTRmMTEwZJ9FWM8=: 00:41:48.204 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:48.205 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.812 nvme0n1 00:41:48.812 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:48.812 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:48.812 01:07:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:48.812 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:48.812 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.812 01:07:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:48.812 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.378 nvme0n1 00:41:49.378 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:49.378 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:49.378 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:49.378 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:49.378 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.378 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:49.637 01:07:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDk3MWYyZDVhOTNkZmEyZjE0N2FlYTY5MDZmZDBmMjdGh1jF: 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: ]] 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDA1ZmUxMTk3OTY3ZjAxMjAxYWFhN2ZmOGIzNTBjMzT9SX44: 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:49.637 01:07:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.204 nvme0n1 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGVlMGNmM2YwZGI1NDExMWM1Y2YyYTJiNGM5ZjEzOTA1ZjljNDA1YjQ5ODA5NDAw+vHffA==: 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: ]] 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYyNTljMmUwNjZkMWRiNzM3N2U0YWE4MWEwNzYyYWap+EZR: 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:41:50.204 01:07:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.204 01:07:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.770 nvme0n1 00:41:50.770 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.770 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:50.770 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:50.770 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.770 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.770 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDRkYWI3YzYyNmI4ODBmMzk1MTMxMTU5YmM5MGEzMmUzNjg1MDBkMGNjNDE3YmI4NzZiNjA2YWI4OGIwZmI4MY/h8mc=: 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:41:51.029 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.596 nvme0n1 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRmOWVhZjYxYmI4ZjBkMjViYzlmM2NlMTlmMjQ0NzVkM2RiMDc4ZDE5MzU5NjhhiclTfA==: 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y2MGFhOTdiMGY3ODY2NTkzZGM2NzkyZWU4N2UxZjhjOWQzNWNhYzAzNDkzOTMxvPzrug==: 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.596 
01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.596 2024/05/15 01:07:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:41:51.596 request: 00:41:51.596 { 00:41:51.596 "method": "bdev_nvme_attach_controller", 00:41:51.596 "params": { 00:41:51.596 "name": "nvme0", 00:41:51.596 "trtype": "tcp", 00:41:51.596 "traddr": "10.0.0.1", 00:41:51.596 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:51.596 "adrfam": "ipv4", 00:41:51.596 "trsvcid": "4420", 00:41:51.596 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:41:51.596 } 00:41:51.596 } 00:41:51.596 Got JSON-RPC error response 00:41:51.596 GoRPCClient: error on JSON-RPC call 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 
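The error dump just above is the first of the expected failures in host/auth.sh: with the kernel target keyed for DH-HMAC-CHAP, bdev_nvme_attach_controller is called with no --dhchap-key and must come back with the JSON-RPC -32602 "Invalid parameters" error, after which bdev_nvme_get_controllers | jq length confirms no controller was left behind; the key2 and key1/ckey2 variants that follow repeat the same pattern. A minimal sketch of that negative check, where the rpc client path and the expect_failure helper are illustrative stand-ins for the suite's rpc_cmd/NOT wrappers rather than the real implementations:

    # Assumed client location; the suite reaches it through its rpc_cmd wrapper.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Simplified stand-in for autotest_common.sh's NOT helper: here the wrapped
    # command failing is the pass condition.
    expect_failure() {
        if "$@"; then
            echo "expected failure, but command succeeded" >&2
            return 1
        fi
    }

    # Attaching without a DH-HMAC-CHAP key must be rejected by the keyed target
    # (the -32602 error captured in the dump above)...
    expect_failure "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

    # ...and the failed attach must not leave a controller behind.
    [[ $("$rpc" bdev_nvme_get_controllers | jq length) -eq 0 ]]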
00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:51.596 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.855 2024/05/15 01:07:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:41:51.855 request: 00:41:51.855 { 00:41:51.855 "method": "bdev_nvme_attach_controller", 00:41:51.855 "params": { 00:41:51.855 "name": "nvme0", 00:41:51.855 "trtype": "tcp", 00:41:51.855 "traddr": "10.0.0.1", 00:41:51.855 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:51.855 "adrfam": "ipv4", 00:41:51.855 "trsvcid": "4420", 00:41:51.855 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:51.855 "dhchap_key": "key2" 00:41:51.855 } 00:41:51.855 } 
00:41:51.855 Got JSON-RPC error response 00:41:51.855 GoRPCClient: error on JSON-RPC call 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:41:51.855 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:51.856 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:51.856 01:07:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:41:51.856 01:07:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.856 2024/05/15 01:07:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:41:51.856 request: 00:41:51.856 { 00:41:51.856 "method": "bdev_nvme_attach_controller", 00:41:51.856 "params": { 00:41:51.856 "name": "nvme0", 00:41:51.856 "trtype": "tcp", 00:41:51.856 "traddr": "10.0.0.1", 00:41:51.856 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:51.856 "adrfam": "ipv4", 00:41:51.856 "trsvcid": "4420", 00:41:51.856 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:51.856 "dhchap_key": "key1", 00:41:51.856 "dhchap_ctrlr_key": "ckey2" 00:41:51.856 } 00:41:51.856 } 00:41:51.856 Got JSON-RPC error response 00:41:51.856 GoRPCClient: error on JSON-RPC call 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:51.856 rmmod nvme_tcp 00:41:51.856 rmmod nvme_fabrics 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 109698 ']' 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 109698 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 109698 ']' 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 109698 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 109698 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:41:51.856 killing process with pid 109698 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 109698' 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 109698 00:41:51.856 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 109698 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:41:52.115 01:07:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:53.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:53.050 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:41:53.050 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:41:53.050 01:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0kT /tmp/spdk.key-null.S92 /tmp/spdk.key-sha256.F44 /tmp/spdk.key-sha384.Jt0 /tmp/spdk.key-sha512.NDk /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:41:53.050 01:07:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:53.308 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:53.308 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:53.308 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:53.568 00:41:53.568 real 0m34.979s 00:41:53.568 user 0m31.099s 00:41:53.568 sys 0m3.594s 00:41:53.568 01:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:41:53.568 01:07:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.568 ************************************ 00:41:53.568 END TEST nvmf_auth_host 00:41:53.568 ************************************ 00:41:53.568 01:07:56 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:41:53.568 01:07:56 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:41:53.568 01:07:56 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:41:53.568 01:07:56 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:41:53.568 01:07:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:53.568 ************************************ 00:41:53.568 START TEST nvmf_digest 00:41:53.568 ************************************ 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:41:53.568 * Looking for test storage... 00:41:53.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:53.568 01:07:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:53.569 01:07:56 
nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:41:53.569 Cannot find device "nvmf_tgt_br" 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:41:53.569 Cannot find device "nvmf_tgt_br2" 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 
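The lines above tear down whatever is left of a previous run's network topology; every command is allowed to fail (the "Cannot find device" messages and the "# true" traces), so the sequence is idempotent before the topology is rebuilt in the lines that follow. A minimal sketch of that best-effort teardown, assuming the interface and namespace names defined in nvmf/common.sh:

  # Best-effort cleanup: the objects may or may not exist, so failures are ignored.
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null || true
      ip link set "$dev" down     2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if        2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true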
00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:41:53.569 Cannot find device "nvmf_tgt_br" 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:41:53.569 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:41:53.828 Cannot find device "nvmf_tgt_br2" 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:53.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:53.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:53.828 01:07:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:41:53.828 01:07:57 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:41:53.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:53.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:41:53.828 00:41:53.828 --- 10.0.0.2 ping statistics --- 00:41:53.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:53.828 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:41:53.828 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:41:54.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:54.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:41:54.087 00:41:54.087 --- 10.0.0.3 ping statistics --- 00:41:54.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:54.087 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:54.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:54.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:41:54.087 00:41:54.087 --- 10.0.0.1 ping statistics --- 00:41:54.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:54.087 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:41:54.087 ************************************ 00:41:54.087 START TEST nvmf_digest_clean 00:41:54.087 ************************************ 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # run_digest 00:41:54.087 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:41:54.087 01:07:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@721 -- # xtrace_disable 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=111292 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 111292 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 111292 ']' 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:41:54.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:54.088 01:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:54.088 [2024-05-15 01:07:57.214489] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:41:54.088 [2024-05-15 01:07:57.214611] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:54.088 [2024-05-15 01:07:57.357186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:54.347 [2024-05-15 01:07:57.468133] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:54.347 [2024-05-15 01:07:57.468200] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:54.347 [2024-05-15 01:07:57.468215] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:54.347 [2024-05-15 01:07:57.468226] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:54.347 [2024-05-15 01:07:57.468236] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
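By this point nvmf_veth_init has rebuilt the test topology and verified it with the three pings above: the target side lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator stays on the host as 10.0.0.1, both sides hang off the nvmf_br bridge, and the nvmf_tgt application has just been started inside that namespace. A condensed, standalone sketch of the same setup, using the names and addresses from this run:

  # target side in a netns (10.0.0.2/.3), initiator on the host (10.0.0.1), joined by a bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                  # host -> target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # target netns -> host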
00:41:54.347 [2024-05-15 01:07:57.468271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.959 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:54.959 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:41:54.959 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:54.959 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@727 -- # xtrace_disable 00:41:54.959 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:55.218 null0 00:41:55.218 [2024-05-15 01:07:58.373417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:55.218 [2024-05-15 01:07:58.397356] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:41:55.218 [2024-05-15 01:07:58.397631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111342 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111342 /var/tmp/bperf.sock 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 111342 ']' 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 
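The target itself is configured through rpc_cmd: the trace above only shows that a null0 bdev appeared and a TCP listener came up on 10.0.0.2:4420, not the individual RPC calls. A plausible hand-driven equivalent is sketched below; the bdev size/block size and the subsystem serial/options are illustrative assumptions, not values taken from this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # talks to the target's default /var/tmp/spdk.sock
  $rpc framework_start_init                           # the target was launched with --wait-for-rpc
  $rpc bdev_null_create null0 100 4096                # bdev name matches the log; size/block size assumed
  $rpc nvmf_create_transport -t tcp -o                # NVMF_TRANSPORT_OPTS='-t tcp -o' in this run
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420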
00:41:55.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:55.218 01:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:55.218 [2024-05-15 01:07:58.453340] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:41:55.218 [2024-05-15 01:07:58.453438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111342 ] 00:41:55.477 [2024-05-15 01:07:58.588548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:55.477 [2024-05-15 01:07:58.680513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:56.412 01:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:41:56.412 01:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:41:56.412 01:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:41:56.412 01:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:41:56.412 01:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:56.670 01:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:56.670 01:07:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:56.928 nvme0n1 00:41:56.928 01:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:41:56.928 01:08:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:56.928 Running I/O for 2 seconds... 
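The bdevperf process is configured over its own RPC socket (/var/tmp/bperf.sock): framework_start_init, then bdev_nvme_attach_controller with --ddgst so the NVMe/TCP connection negotiates data digest (CRC32C), which is what this test exercises, and finally perform_tests via bdevperf.py. Spelled out, the wrapper calls above amount to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  $rpc -s $sock framework_start_init
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0           # exposes nvme0n1 to bdevperf
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests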
00:41:59.460 00:41:59.460 Latency(us) 00:41:59.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:59.460 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:41:59.460 nvme0n1 : 2.00 18345.98 71.66 0.00 0.00 6968.38 3589.59 17873.45 00:41:59.460 =================================================================================================================== 00:41:59.460 Total : 18345.98 71.66 0.00 0.00 6968.38 3589.59 17873.45 00:41:59.460 0 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:41:59.460 | select(.opcode=="crc32c") 00:41:59.460 | "\(.module_name) \(.executed)"' 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111342 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 111342 ']' 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 111342 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 111342 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:41:59.460 killing process with pid 111342 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 111342' 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 111342 00:41:59.460 Received shutdown signal, test time was about 2.000000 seconds 00:41:59.460 00:41:59.460 Latency(us) 00:41:59.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:59.460 =================================================================================================================== 00:41:59.460 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 111342 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111427 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111427 /var/tmp/bperf.sock 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 111427 ']' 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:41:59.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:41:59.460 01:08:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:59.719 I/O size of 131072 is greater than zero copy threshold (65536). 00:41:59.719 Zero copy mechanism will not be used. 00:41:59.719 [2024-05-15 01:08:02.785691] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:41:59.719 [2024-05-15 01:08:02.785781] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111427 ] 00:41:59.719 [2024-05-15 01:08:02.921091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:59.978 [2024-05-15 01:08:03.017855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:00.545 01:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:00.545 01:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:42:00.545 01:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:42:00.545 01:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:42:00.545 01:08:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:01.113 01:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:01.113 01:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:01.113 nvme0n1 00:42:01.113 01:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:42:01.113 01:08:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:01.371 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:01.371 Zero copy mechanism will not be used. 00:42:01.371 Running I/O for 2 seconds... 
00:42:03.273 00:42:03.273 Latency(us) 00:42:03.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:03.273 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:42:03.273 nvme0n1 : 2.00 7742.59 967.82 0.00 0.00 2062.28 644.19 6196.13 00:42:03.273 =================================================================================================================== 00:42:03.273 Total : 7742.59 967.82 0.00 0.00 2062.28 644.19 6196.13 00:42:03.273 0 00:42:03.273 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:42:03.273 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:42:03.273 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:42:03.273 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:42:03.273 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:42:03.273 | select(.opcode=="crc32c") 00:42:03.273 | "\(.module_name) \(.executed)"' 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111427 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 111427 ']' 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 111427 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 111427 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:42:03.532 killing process with pid 111427 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 111427' 00:42:03.532 Received shutdown signal, test time was about 2.000000 seconds 00:42:03.532 00:42:03.532 Latency(us) 00:42:03.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:03.532 =================================================================================================================== 00:42:03.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 111427 00:42:03.532 01:08:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 111427 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111512 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111512 /var/tmp/bperf.sock 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 111512 ']' 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:03.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:03.791 01:08:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:03.791 [2024-05-15 01:08:07.076413] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
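run_bperf is invoked four times in this test and only the bdevperf workload arguments change between runs: randread and randwrite, each at 4 KiB with queue depth 128 and at 128 KiB with queue depth 16, always with scan_dsa=false so crc32c is handled by the software accel module. The pattern the log follows is equivalent to this sketch, assuming the run_bperf helper from digest.sh:

  for rw in randread randwrite; do
      run_bperf "$rw" 4096   128 false    # 4 KiB I/O, qd=128, no DSA offload
      run_bperf "$rw" 131072 16  false    # 128 KiB I/O, qd=16
  done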
00:42:03.791 [2024-05-15 01:08:07.076524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111512 ] 00:42:04.050 [2024-05-15 01:08:07.218181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:04.050 [2024-05-15 01:08:07.314228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:05.008 01:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:05.008 01:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:42:05.008 01:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:42:05.008 01:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:42:05.008 01:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:05.266 01:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:05.266 01:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:05.525 nvme0n1 00:42:05.525 01:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:42:05.525 01:08:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:05.783 Running I/O for 2 seconds... 
00:42:07.828 00:42:07.828 Latency(us) 00:42:07.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:07.828 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:07.828 nvme0n1 : 2.00 22152.07 86.53 0.00 0.00 5771.81 2442.71 9115.46 00:42:07.828 =================================================================================================================== 00:42:07.828 Total : 22152.07 86.53 0.00 0.00 5771.81 2442.71 9115.46 00:42:07.828 0 00:42:07.828 01:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:42:07.828 01:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:42:07.828 01:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:42:07.828 01:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:42:07.828 01:08:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:42:07.828 | select(.opcode=="crc32c") 00:42:07.828 | "\(.module_name) \(.executed)"' 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111512 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 111512 ']' 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 111512 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 111512 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:42:08.087 killing process with pid 111512 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 111512' 00:42:08.087 Received shutdown signal, test time was about 2.000000 seconds 00:42:08.087 00:42:08.087 Latency(us) 00:42:08.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:08.087 =================================================================================================================== 00:42:08.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 111512 00:42:08.087 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 111512 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111607 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111607 /var/tmp/bperf.sock 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 111607 ']' 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:08.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:08.347 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:08.347 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:08.347 Zero copy mechanism will not be used. 00:42:08.347 [2024-05-15 01:08:11.448195] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:42:08.347 [2024-05-15 01:08:11.448320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111607 ] 00:42:08.347 [2024-05-15 01:08:11.584859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:08.605 [2024-05-15 01:08:11.684437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:08.606 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:08.606 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:42:08.606 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:42:08.606 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:42:08.606 01:08:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:08.886 01:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:08.886 01:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:09.147 nvme0n1 00:42:09.147 01:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:42:09.147 01:08:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:09.405 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:09.405 Zero copy mechanism will not be used. 00:42:09.405 Running I/O for 2 seconds... 
00:42:11.309 00:42:11.309 Latency(us) 00:42:11.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.309 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:42:11.309 nvme0n1 : 2.00 6250.32 781.29 0.00 0.00 2553.24 1921.40 12094.37 00:42:11.309 =================================================================================================================== 00:42:11.309 Total : 6250.32 781.29 0.00 0.00 2553.24 1921.40 12094.37 00:42:11.309 0 00:42:11.309 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:42:11.309 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:42:11.309 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:42:11.309 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:42:11.309 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:42:11.309 | select(.opcode=="crc32c") 00:42:11.309 | "\(.module_name) \(.executed)"' 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111607 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 111607 ']' 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 111607 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 111607 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:42:11.568 killing process with pid 111607 00:42:11.568 Received shutdown signal, test time was about 2.000000 seconds 00:42:11.568 00:42:11.568 Latency(us) 00:42:11.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.568 =================================================================================================================== 00:42:11.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 111607' 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 111607 00:42:11.568 01:08:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 111607 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 111292 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@947 -- # '[' -z 111292 ']' 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 111292 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 111292 00:42:11.827 killing process with pid 111292 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 111292' 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 111292 00:42:11.827 [2024-05-15 01:08:15.064971] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:42:11.827 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 111292 00:42:12.088 ************************************ 00:42:12.088 END TEST nvmf_digest_clean 00:42:12.088 ************************************ 00:42:12.088 00:42:12.088 real 0m18.123s 00:42:12.088 user 0m34.541s 00:42:12.088 sys 0m4.554s 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:42:12.088 ************************************ 00:42:12.088 START TEST nvmf_digest_error 00:42:12.088 ************************************ 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:12.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
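One quick consistency check on the four result tables above: the MiB/s column is simply IOPS multiplied by the I/O size. Taking the last run as an example:

  # 6250.32 IOPS x 131072-byte I/Os = 781.29 MiB/s, matching the randwrite/qd16 table above
  awk 'BEGIN { printf "%.2f MiB/s\n", 6250.32 * 131072 / (1024 * 1024) }'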
00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=111701 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 111701 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 111701 ']' 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:12.088 01:08:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:12.347 [2024-05-15 01:08:15.377366] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:42:12.347 [2024-05-15 01:08:15.377464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:12.347 [2024-05-15 01:08:15.516865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.347 [2024-05-15 01:08:15.605454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:12.347 [2024-05-15 01:08:15.605505] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:12.347 [2024-05-15 01:08:15.605517] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:12.347 [2024-05-15 01:08:15.605525] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:12.347 [2024-05-15 01:08:15.605532] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
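The nvmf_digest_error variant that starts here reuses the same topology but deliberately breaks the digest path: in the lines that follow, the target routes the crc32c opcode through the "error" accel module (accel_assign_opc -o crc32c -m error), the bperf side disables retries and enables per-error statistics, and once the controller is attached with --ddgst, 256 crc32c operations are corrupted so that reads complete with data digest errors (the nvme_tcp messages further down). A sketch of that RPC sequence, assuming the target listens on the default /var/tmp/spdk.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o crc32c -m error                    # target: crc32c handled by the error module
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable          # start with injection off
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 digests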
00:42:12.347 [2024-05-15 01:08:15.605574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:13.283 [2024-05-15 01:08:16.386153] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.283 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:13.284 null0 00:42:13.284 [2024-05-15 01:08:16.498379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:13.284 [2024-05-15 01:08:16.522318] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:42:13.284 [2024-05-15 01:08:16.522561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=111751 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 111751 /var/tmp/bperf.sock 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 111751 ']' 00:42:13.284 01:08:16 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:13.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:13.284 01:08:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:13.614 [2024-05-15 01:08:16.572216] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:42:13.614 [2024-05-15 01:08:16.572298] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111751 ] 00:42:13.614 [2024-05-15 01:08:16.711837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:13.614 [2024-05-15 01:08:16.812328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:14.549 01:08:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:14.550 01:08:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:42:14.550 01:08:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:14.550 01:08:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:14.550 01:08:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:42:14.550 01:08:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.550 01:08:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:14.550 01:08:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.550 01:08:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:14.550 01:08:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:15.118 nvme0n1 00:42:15.118 01:08:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:42:15.118 01:08:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.118 01:08:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:15.118 01:08:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.118 01:08:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:42:15.118 01:08:18 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:15.118 Running I/O for 2 seconds... 00:42:15.118 [2024-05-15 01:08:18.278304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.118 [2024-05-15 01:08:18.278361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.118 [2024-05-15 01:08:18.278379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.118 [2024-05-15 01:08:18.291385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.118 [2024-05-15 01:08:18.291427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.118 [2024-05-15 01:08:18.291442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.118 [2024-05-15 01:08:18.304033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.118 [2024-05-15 01:08:18.304071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.118 [2024-05-15 01:08:18.304102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.118 [2024-05-15 01:08:18.318859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.118 [2024-05-15 01:08:18.318895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.118 [2024-05-15 01:08:18.318925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.118 [2024-05-15 01:08:18.330014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.118 [2024-05-15 01:08:18.330050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.118 [2024-05-15 01:08:18.330080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.118 [2024-05-15 01:08:18.343408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.118 [2024-05-15 01:08:18.343462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.118 [2024-05-15 01:08:18.343492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.118 [2024-05-15 01:08:18.358553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.118 [2024-05-15 01:08:18.358591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:70 nsid:1 lba:244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.118 [2024-05-15 01:08:18.358667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.118 [2024-05-15 01:08:18.372600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.118 [2024-05-15 01:08:18.372665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.118 [2024-05-15 01:08:18.372681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.118 [2024-05-15 01:08:18.387129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.118 [2024-05-15 01:08:18.387170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.118 [2024-05-15 01:08:18.387183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.118 [2024-05-15 01:08:18.399539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.118 [2024-05-15 01:08:18.399581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.118 [2024-05-15 01:08:18.399612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.378 [2024-05-15 01:08:18.412874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.378 [2024-05-15 01:08:18.412913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.378 [2024-05-15 01:08:18.412944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.378 [2024-05-15 01:08:18.426933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.378 [2024-05-15 01:08:18.426985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.378 [2024-05-15 01:08:18.427001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.378 [2024-05-15 01:08:18.440626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.378 [2024-05-15 01:08:18.440676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.378 [2024-05-15 01:08:18.440714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.378 [2024-05-15 01:08:18.456239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.378 [2024-05-15 01:08:18.456277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.456291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.470055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.470093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.470124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.481731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.481769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.481783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.496921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.496959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.496990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.509436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.509474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.509504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.523973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.524012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.524026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.539289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.539330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.539344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.552853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 
[2024-05-15 01:08:18.552890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.552919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.566524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.566562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.566592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.579337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.579376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.579391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.593123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.593161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.593191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.605857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.605902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.605915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.620136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.620174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.620204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.636304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.636350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.636381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.650257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.650296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.650327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.379 [2024-05-15 01:08:18.661856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.379 [2024-05-15 01:08:18.661893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.379 [2024-05-15 01:08:18.661923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.639 [2024-05-15 01:08:18.676145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.639 [2024-05-15 01:08:18.676184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.639 [2024-05-15 01:08:18.676198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.690109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.690146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.690176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.702603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.702668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.702698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.717433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.717471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.717501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.731176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.731215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.731229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.743340] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.743380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.743394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.756853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.756892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.756906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.770519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.770559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.770573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.784368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.784407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.784422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.797815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.797852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.797882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.812122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.812161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.812191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.825760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.825798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.825812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
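The stream of data-digest / transient-transport-error pairs above is driven by the bdevperf commands traced earlier (host/digest.sh@57 through @69). Condensed into a plain shell sequence, with every command and flag lifted from the trace and only the variable names added:

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # bdevperf in wait-for-tests mode (-z): randread, 4 KiB I/O, queue depth 128, 2 s.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Keep per-status NVMe error counters and retry indefinitely, so the injected
    # digest failures are visible without failing the whole run outright.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any earlier crc32c injection on the target (default /var/tmp/spdk.sock).
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # Connect with data digest enabled; the initiator then validates crc32c on
    # every C2H data PDU it receives for the randread workload.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the target's crc32c results so the digests it emits stop matching
    # the payload ("-i 256" is copied verbatim from the trace).
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

    # Start the timed run; each mismatch is logged by nvme_tcp.c on the host and
    # completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), as seen above.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests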
00:42:15.640 [2024-05-15 01:08:18.839363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.839403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.839417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.854349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.854389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.854404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.868508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.868547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.868578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.878707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.878745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.878759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.894940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.640 [2024-05-15 01:08:18.895003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.640 [2024-05-15 01:08:18.895017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.640 [2024-05-15 01:08:18.908145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.641 [2024-05-15 01:08:18.908182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.641 [2024-05-15 01:08:18.908211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.641 [2024-05-15 01:08:18.922346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.641 [2024-05-15 01:08:18.922407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.641 [2024-05-15 01:08:18.922437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.900 [2024-05-15 01:08:18.934455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.900 [2024-05-15 01:08:18.934491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.900 [2024-05-15 01:08:18.934521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.900 [2024-05-15 01:08:18.947935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.900 [2024-05-15 01:08:18.947971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.900 [2024-05-15 01:08:18.948001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.900 [2024-05-15 01:08:18.960872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.900 [2024-05-15 01:08:18.960907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.900 [2024-05-15 01:08:18.960936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.900 [2024-05-15 01:08:18.975379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.900 [2024-05-15 01:08:18.975434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.900 [2024-05-15 01:08:18.975464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.900 [2024-05-15 01:08:18.988586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.900 [2024-05-15 01:08:18.988648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.900 [2024-05-15 01:08:18.988680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.900 [2024-05-15 01:08:19.001859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.900 [2024-05-15 01:08:19.001896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.900 [2024-05-15 01:08:19.001927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.900 [2024-05-15 01:08:19.014184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.900 [2024-05-15 01:08:19.014240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.900 [2024-05-15 01:08:19.014253] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.900 [2024-05-15 01:08:19.027944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.900 [2024-05-15 01:08:19.027999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.900 [2024-05-15 01:08:19.028029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.900 [2024-05-15 01:08:19.040683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.900 [2024-05-15 01:08:19.040733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.040763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.901 [2024-05-15 01:08:19.052932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.901 [2024-05-15 01:08:19.052969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.053000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.901 [2024-05-15 01:08:19.065781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.901 [2024-05-15 01:08:19.065815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.065845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.901 [2024-05-15 01:08:19.080021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.901 [2024-05-15 01:08:19.080058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.080089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.901 [2024-05-15 01:08:19.093857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.901 [2024-05-15 01:08:19.093893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.093922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.901 [2024-05-15 01:08:19.108600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.901 [2024-05-15 01:08:19.108682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.108698] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.901 [2024-05-15 01:08:19.121715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.901 [2024-05-15 01:08:19.121751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.121781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.901 [2024-05-15 01:08:19.135710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.901 [2024-05-15 01:08:19.135762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.135792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.901 [2024-05-15 01:08:19.149105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.901 [2024-05-15 01:08:19.149140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.149170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.901 [2024-05-15 01:08:19.164758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.901 [2024-05-15 01:08:19.164795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.164825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:15.901 [2024-05-15 01:08:19.177194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:15.901 [2024-05-15 01:08:19.177232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:15.901 [2024-05-15 01:08:19.177263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.160 [2024-05-15 01:08:19.191363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.160 [2024-05-15 01:08:19.191401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.160 [2024-05-15 01:08:19.191435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.160 [2024-05-15 01:08:19.204242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.160 [2024-05-15 01:08:19.204281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:42:16.160 [2024-05-15 01:08:19.204312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.160 [2024-05-15 01:08:19.218803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.160 [2024-05-15 01:08:19.218846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.160 [2024-05-15 01:08:19.218860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.160 [2024-05-15 01:08:19.232856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.160 [2024-05-15 01:08:19.232900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.160 [2024-05-15 01:08:19.232914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.160 [2024-05-15 01:08:19.247122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.160 [2024-05-15 01:08:19.247164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.160 [2024-05-15 01:08:19.247179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.160 [2024-05-15 01:08:19.258923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.160 [2024-05-15 01:08:19.258968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.258983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.273149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.273185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.273199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.286419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.286459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.286473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.301551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.301591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:24316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.301622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.315424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.315466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.315480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.327834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.327873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.327887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.341464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.341503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.341517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.354461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.354500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.354531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.370425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.370466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.370480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.385395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.385440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.385455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.398136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.398175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.398191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.413541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.413593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.413624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.428281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.428320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.428334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.161 [2024-05-15 01:08:19.442543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.161 [2024-05-15 01:08:19.442583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.161 [2024-05-15 01:08:19.442612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.420 [2024-05-15 01:08:19.455649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.420 [2024-05-15 01:08:19.455690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.420 [2024-05-15 01:08:19.455705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.420 [2024-05-15 01:08:19.468968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.420 [2024-05-15 01:08:19.469013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.420 [2024-05-15 01:08:19.469028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.420 [2024-05-15 01:08:19.482426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.420 [2024-05-15 01:08:19.482465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.420 [2024-05-15 01:08:19.482480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.420 [2024-05-15 01:08:19.494243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 
00:42:16.420 [2024-05-15 01:08:19.494283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.420 [2024-05-15 01:08:19.494297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.420 [2024-05-15 01:08:19.508977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.420 [2024-05-15 01:08:19.509015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.420 [2024-05-15 01:08:19.509045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.420 [2024-05-15 01:08:19.523360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.523401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.523415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.536758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.536817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.536832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.549013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.549070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.549100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.564540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.564580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.564624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.578971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.579017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.579031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.594363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.594412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.594443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.607583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.607628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.607643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.619605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.619686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.619700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.635430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.635470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.635485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.649738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.649776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.649806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.659867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.659904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.659934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.676641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.676722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.676753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.688373] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.688411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.688442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.421 [2024-05-15 01:08:19.703141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.421 [2024-05-15 01:08:19.703195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.421 [2024-05-15 01:08:19.703210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.680 [2024-05-15 01:08:19.717475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.680 [2024-05-15 01:08:19.717513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.680 [2024-05-15 01:08:19.717544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.680 [2024-05-15 01:08:19.731317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.680 [2024-05-15 01:08:19.731356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.680 [2024-05-15 01:08:19.731386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.680 [2024-05-15 01:08:19.745923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.680 [2024-05-15 01:08:19.745964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.680 [2024-05-15 01:08:19.745978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.680 [2024-05-15 01:08:19.758563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.680 [2024-05-15 01:08:19.758647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.680 [2024-05-15 01:08:19.758663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.680 [2024-05-15 01:08:19.771881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.680 [2024-05-15 01:08:19.771952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.680 [2024-05-15 01:08:19.771967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
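Each injected failure shows up above as a pair of entries: the nvme_tcp.c data-digest error and the matching completion with status 00/22. A quick, illustrative way to tally them from a saved copy of this console output; the file name is hypothetical.

    # Count the failure pairs in a saved console log (file name is an example).
    grep -c 'data digest error on tqpair' nvmf_digest_error_console.log
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error_console.log

    # Rough per-second spread across the 2-second bdevperf run, keyed on the
    # bracketed SPDK timestamps.
    grep -o '\[2024-05-15 01:08:[0-9]\{2\}' nvmf_digest_error_console.log | sort | uniq -c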
00:42:16.680 [2024-05-15 01:08:19.786185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.680 [2024-05-15 01:08:19.786275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.680 [2024-05-15 01:08:19.786292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.680 [2024-05-15 01:08:19.800962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.680 [2024-05-15 01:08:19.801002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.680 [2024-05-15 01:08:19.801033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.680 [2024-05-15 01:08:19.813573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.680 [2024-05-15 01:08:19.813655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.680 [2024-05-15 01:08:19.813670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.680 [2024-05-15 01:08:19.827227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.680 [2024-05-15 01:08:19.827267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.827281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.681 [2024-05-15 01:08:19.841790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.681 [2024-05-15 01:08:19.841830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.841845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.681 [2024-05-15 01:08:19.853417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.681 [2024-05-15 01:08:19.853457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.853472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.681 [2024-05-15 01:08:19.868453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.681 [2024-05-15 01:08:19.868493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.868508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.681 [2024-05-15 01:08:19.881148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.681 [2024-05-15 01:08:19.881187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.881202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.681 [2024-05-15 01:08:19.894580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.681 [2024-05-15 01:08:19.894653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.894668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.681 [2024-05-15 01:08:19.908212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.681 [2024-05-15 01:08:19.908278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.908311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.681 [2024-05-15 01:08:19.920536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.681 [2024-05-15 01:08:19.920586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.920630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.681 [2024-05-15 01:08:19.933802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.681 [2024-05-15 01:08:19.933841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.933872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.681 [2024-05-15 01:08:19.948535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.681 [2024-05-15 01:08:19.948576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.948591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.681 [2024-05-15 01:08:19.962736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.681 [2024-05-15 01:08:19.962774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.681 [2024-05-15 01:08:19.962788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.939 [2024-05-15 01:08:19.976378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.939 [2024-05-15 01:08:19.976416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.939 [2024-05-15 01:08:19.976447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.939 [2024-05-15 01:08:19.990328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.939 [2024-05-15 01:08:19.990368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.939 [2024-05-15 01:08:19.990382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.939 [2024-05-15 01:08:20.003179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.939 [2024-05-15 01:08:20.003218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.939 [2024-05-15 01:08:20.003233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.939 [2024-05-15 01:08:20.017518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.939 [2024-05-15 01:08:20.017575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.939 [2024-05-15 01:08:20.017590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.939 [2024-05-15 01:08:20.031170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.939 [2024-05-15 01:08:20.031227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.939 [2024-05-15 01:08:20.031242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.939 [2024-05-15 01:08:20.044126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.939 [2024-05-15 01:08:20.044170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.939 [2024-05-15 01:08:20.044202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.939 [2024-05-15 01:08:20.058693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.939 [2024-05-15 01:08:20.058734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.939 
[2024-05-15 01:08:20.058749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.939 [2024-05-15 01:08:20.070584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.939 [2024-05-15 01:08:20.070633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.939 [2024-05-15 01:08:20.070647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.939 [2024-05-15 01:08:20.085448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.939 [2024-05-15 01:08:20.085490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.939 [2024-05-15 01:08:20.085505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.939 [2024-05-15 01:08:20.100160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.939 [2024-05-15 01:08:20.100239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.940 [2024-05-15 01:08:20.100260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.940 [2024-05-15 01:08:20.114876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.940 [2024-05-15 01:08:20.114918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.940 [2024-05-15 01:08:20.114932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.940 [2024-05-15 01:08:20.127818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.940 [2024-05-15 01:08:20.127856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.940 [2024-05-15 01:08:20.127870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.940 [2024-05-15 01:08:20.141647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.940 [2024-05-15 01:08:20.141701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.940 [2024-05-15 01:08:20.141715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.940 [2024-05-15 01:08:20.156410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.940 [2024-05-15 01:08:20.156449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19777 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:42:16.940 [2024-05-15 01:08:20.156479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.940 [2024-05-15 01:08:20.169158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.940 [2024-05-15 01:08:20.169199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.940 [2024-05-15 01:08:20.169230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.940 [2024-05-15 01:08:20.182491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.940 [2024-05-15 01:08:20.182530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.940 [2024-05-15 01:08:20.182560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.940 [2024-05-15 01:08:20.197172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.940 [2024-05-15 01:08:20.197211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.940 [2024-05-15 01:08:20.197242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.940 [2024-05-15 01:08:20.208983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.940 [2024-05-15 01:08:20.209022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.940 [2024-05-15 01:08:20.209036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:16.940 [2024-05-15 01:08:20.222411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:16.940 [2024-05-15 01:08:20.222450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:16.940 [2024-05-15 01:08:20.222464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:17.198 [2024-05-15 01:08:20.237336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:17.198 [2024-05-15 01:08:20.237402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:17.198 [2024-05-15 01:08:20.237418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:17.198 [2024-05-15 01:08:20.251714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf23d00) 00:42:17.198 [2024-05-15 01:08:20.251751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:19288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:17.198 [2024-05-15 01:08:20.251781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:17.198 00:42:17.198 Latency(us) 00:42:17.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:17.198 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:17.198 nvme0n1 : 2.01 18556.26 72.49 0.00 0.00 6889.95 3723.64 18230.92 00:42:17.198 =================================================================================================================== 00:42:17.199 Total : 18556.26 72.49 0.00 0.00 6889.95 3723.64 18230.92 00:42:17.199 0 00:42:17.199 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:42:17.199 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:42:17.199 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:42:17.199 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:42:17.199 | .driver_specific 00:42:17.199 | .nvme_error 00:42:17.199 | .status_code 00:42:17.199 | .command_transient_transport_error' 00:42:17.457 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:42:17.457 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 111751 00:42:17.457 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 111751 ']' 00:42:17.457 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 111751 00:42:17.457 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:42:17.457 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:17.457 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 111751 00:42:17.457 killing process with pid 111751 00:42:17.457 Received shutdown signal, test time was about 2.000000 seconds 00:42:17.457 00:42:17.457 Latency(us) 00:42:17.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:17.457 =================================================================================================================== 00:42:17.458 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:17.458 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:42:17.458 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:42:17.458 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 111751' 00:42:17.458 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 111751 00:42:17.458 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 111751 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randread 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=111836 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 111836 /var/tmp/bperf.sock 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 111836 ']' 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:17.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:17.717 01:08:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:17.717 [2024-05-15 01:08:20.821829] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:42:17.717 [2024-05-15 01:08:20.822109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111836 ] 00:42:17.717 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:17.717 Zero copy mechanism will not be used. 
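For readers following the trace: the get_transient_errcount check logged at the end of the previous run (host/digest.sh@27-@28 above) reduces to one bdevperf RPC plus a jq filter. A minimal standalone sketch, assuming the same rpc.py path, bperf socket and bdev name as in this log and that jq is installed:

#!/usr/bin/env bash
# Count NVMe "transient transport error" completions recorded by bdevperf.
# rpc.py path, socket and bdev name are taken from this log; adjust as needed.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

# Relies on bdev_nvme_set_options --nvme-error-stat having been issued earlier
# in the run, so per-status-code error counters show up in the iostat output.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# This run counted 145 such completions; any value > 0 means the injected
# digest corruption was detected and surfaced as transient transport errors.
(( errcount > 0 )) && echo "transient transport errors: $errcount"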
00:42:17.717 [2024-05-15 01:08:20.964095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.975 [2024-05-15 01:08:21.047071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:18.543 01:08:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:18.543 01:08:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:42:18.543 01:08:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:18.543 01:08:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:19.110 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:42:19.110 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:19.110 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:19.110 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:19.110 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:19.110 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:19.369 nvme0n1 00:42:19.369 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:42:19.369 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:19.369 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:19.369 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:19.369 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:42:19.369 01:08:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:19.369 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:19.369 Zero copy mechanism will not be used. 00:42:19.369 Running I/O for 2 seconds... 
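The RPC sequence traced just above sets up this error run end to end: NVMe error counters are enabled on the bdevperf side, any leftover accel error injection is cleared, the controller is attached with data digests (--ddgst) enabled, crc32c corruption is armed for every 32nd operation, and the timed workload is started over bdevperf's RPC socket. A hedged sketch of those same steps as plain rpc.py calls follows; the repo path, addresses, NQN and bperf socket mirror this log, while the un-socketed accel_error_inject_error calls stand in for the harness's rpc_cmd wrapper, whose target socket is configured elsewhere in the test and is an assumption here. The digest-error lines that follow are the expected result of this setup.

#!/usr/bin/env bash
# Reproduce the per-run setup seen in the trace above (paths from this log).
spdk=/home/vagrant/spdk_repo/spdk
bperf_sock=/var/tmp/bperf.sock

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
# so the corrupted reads below are retried rather than failing the job.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any injection left over from the previous run (placeholder socket, see above).
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the NVMe-oF/TCP controller with data digest enabled, so data PDUs carry
# a CRC32C that is verified on receive.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c operation, so computed digests disagree with the
# digests carried in the data PDUs and the host logs the data digest errors
# seen in this run (nvme_tcp.c:1450).
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the timed workload bdevperf was launched with (-w randread -t 2 -q 16 here).
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests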
00:42:19.369 [2024-05-15 01:08:22.596383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.369 [2024-05-15 01:08:22.596453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.369 [2024-05-15 01:08:22.596468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.369 [2024-05-15 01:08:22.601334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.369 [2024-05-15 01:08:22.601376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.369 [2024-05-15 01:08:22.601390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.369 [2024-05-15 01:08:22.605968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.369 [2024-05-15 01:08:22.606024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.369 [2024-05-15 01:08:22.606037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.369 [2024-05-15 01:08:22.609481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.369 [2024-05-15 01:08:22.609521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.369 [2024-05-15 01:08:22.609535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.369 [2024-05-15 01:08:22.613631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.369 [2024-05-15 01:08:22.613670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.369 [2024-05-15 01:08:22.613684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.369 [2024-05-15 01:08:22.617462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.369 [2024-05-15 01:08:22.617503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.369 [2024-05-15 01:08:22.617516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.369 [2024-05-15 01:08:22.621504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.369 [2024-05-15 01:08:22.621544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.369 [2024-05-15 01:08:22.621557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.369 [2024-05-15 01:08:22.625430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.369 [2024-05-15 01:08:22.625469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.369 [2024-05-15 01:08:22.625482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.369 [2024-05-15 01:08:22.629257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.369 [2024-05-15 01:08:22.629296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.369 [2024-05-15 01:08:22.629309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.369 [2024-05-15 01:08:22.633223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.369 [2024-05-15 01:08:22.633261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.370 [2024-05-15 01:08:22.633275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.370 [2024-05-15 01:08:22.637009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.370 [2024-05-15 01:08:22.637063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.370 [2024-05-15 01:08:22.637077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.370 [2024-05-15 01:08:22.640957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.370 [2024-05-15 01:08:22.640997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.370 [2024-05-15 01:08:22.641011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.370 [2024-05-15 01:08:22.645108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.370 [2024-05-15 01:08:22.645164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.370 [2024-05-15 01:08:22.645177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.370 [2024-05-15 01:08:22.649045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.370 [2024-05-15 01:08:22.649084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.370 [2024-05-15 01:08:22.649097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.370 [2024-05-15 01:08:22.653099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.370 [2024-05-15 01:08:22.653138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.370 [2024-05-15 01:08:22.653151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.657351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.657406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.657419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.661349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.661405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.661434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.665232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.665285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.665313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.668920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.668973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.669003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.673307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.673360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.673390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.676906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.676959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.676987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.680857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.680911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.680942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.684715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.684768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.684813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.688886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.688928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.688942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.692694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.692732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.692745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.696517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.696571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.696600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.700233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.700272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.700286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.704167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.704206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 
[2024-05-15 01:08:22.704219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.707803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.707858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.707871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.711968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.712037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.712066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.715095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.715137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.715151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.719175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.719217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.719231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.723312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.723351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.723365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.726441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.629 [2024-05-15 01:08:22.726481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.629 [2024-05-15 01:08:22.726494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.629 [2024-05-15 01:08:22.729850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.729905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.729918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.733857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.733895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.733909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.738282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.738341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.738355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.741690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.741745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.741774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.746379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.746435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.746449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.749997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.750065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.750078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.753324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.753363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.753375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.756842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.756880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.756893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.760888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.760945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.760958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.765103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.765161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.765175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.769415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.769471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.769500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.773114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.773168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.773197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.776829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.776884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.776913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.780118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.780172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.780202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.784128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.784182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.784212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.788675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.788730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.788743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.793844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.793898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.793927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.798295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.798350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.798379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.801061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.801112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.801141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.805782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.805837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.805866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.809179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.809232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.809261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.813036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 
01:08:22.813093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.813122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.817438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.817494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.817524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.821069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.630 [2024-05-15 01:08:22.821109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.630 [2024-05-15 01:08:22.821122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.630 [2024-05-15 01:08:22.825415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.825456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.825469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.829493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.829533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.829547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.832945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.832984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.832997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.836809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.836846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.836860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.840514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.840552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.840565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.843817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.843873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.843891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.847961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.848012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.848025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.851623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.851661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.851675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.855493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.855533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.855546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.859767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.859807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.859819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.863028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.863067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.863080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.867776] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.867816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.867829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.870896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.870934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.870948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.874618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.874669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.874698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.879211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.879252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.879266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.883921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.883960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.883974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.887401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.887438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.887451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.891080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.891121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.891134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
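[editorial note, not part of the captured console output] The repeated nvme_tcp_accel_seq_recv_compute_crc32_done errors above are the NVMe/TCP data digest check failing: the digest is a CRC32C computed over the received DATA PDU payload, and a mismatch causes the command to be completed with the TRANSIENT TRANSPORT ERROR status shown in the paired spdk_nvme_print_completion lines (dnr:0, so the I/O is retryable). As a rough sketch only, not SPDK's accel-offloaded implementation, the value being verified can be computed like this:

#include <stdint.h>
#include <stddef.h>

/* Illustrative bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
 * the digest algorithm NVMe/TCP uses for header and data digests. This is
 * only meant to show what the receive path is verifying; SPDK normally
 * computes it through its accel framework. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;   /* e.g. crc32c("123456789") == 0xE3069283 */
}

A "data digest error" in this log means the value computed this way over the received payload disagreed with the digest carried in the PDU, so the READ is failed back with a retryable transport status instead of being silently accepted.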
00:42:19.631 [2024-05-15 01:08:22.894629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.894680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.894694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.899555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.899625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.899640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.903142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.903182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.903195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.907730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.907769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.907783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.631 [2024-05-15 01:08:22.911459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.631 [2024-05-15 01:08:22.911497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.631 [2024-05-15 01:08:22.911510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.891 [2024-05-15 01:08:22.915501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.891 [2024-05-15 01:08:22.915539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.891 [2024-05-15 01:08:22.915552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.891 [2024-05-15 01:08:22.919784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.891 [2024-05-15 01:08:22.919839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.891 [2024-05-15 01:08:22.919852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.891 [2024-05-15 01:08:22.923282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.891 [2024-05-15 01:08:22.923338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.891 [2024-05-15 01:08:22.923352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.891 [2024-05-15 01:08:22.927114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.891 [2024-05-15 01:08:22.927153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.891 [2024-05-15 01:08:22.927167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.891 [2024-05-15 01:08:22.931303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.891 [2024-05-15 01:08:22.931343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.891 [2024-05-15 01:08:22.931356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.891 [2024-05-15 01:08:22.935187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.891 [2024-05-15 01:08:22.935226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.891 [2024-05-15 01:08:22.935239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.891 [2024-05-15 01:08:22.938706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.938746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.938760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.943061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.943103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.943116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.947023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.947065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.947079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.951286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.951357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.951370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.955219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.955259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.955280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.958697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.958749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.958780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.962331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.962385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.962416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.966207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.966261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.966290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.970352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.970407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.970420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.973651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.973699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.973712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.978244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.978299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.978312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.982582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.982633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.982648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.986218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.986258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.986271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.989630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.989669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.989682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.993321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.993360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.993373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:22.997351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:22.997393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:22.997406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:23.001534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:23.001574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 
01:08:23.001587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:23.005415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:23.005454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:23.005468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:23.009590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:23.009643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:23.009656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:23.012959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:23.012998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:23.013011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:23.017136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:23.017175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:23.017188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:23.021634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:23.021672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:23.021686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:23.026064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:23.026104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:23.026117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:23.030123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:23.030160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:42:19.892 [2024-05-15 01:08:23.030173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.892 [2024-05-15 01:08:23.032834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.892 [2024-05-15 01:08:23.032871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.032884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.037407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.037447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.037460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.041510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.041548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.041562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.044716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.044754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.044767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.048912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.048951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.048965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.053395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.053435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.053448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.056434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.056473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.056485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.060432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.060471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.060484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.064907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.064946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.064958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.069884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.069924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.069938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.074652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.074693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.074707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.077312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.077350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.077362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.081447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.081487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.081501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.086033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.086072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.086085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.089715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.089754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.089767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.093251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.093290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.093303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.097097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.097136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.097148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.100340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.100378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.100392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.104350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.104389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.104402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.108561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.108610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.108624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.112619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 
[2024-05-15 01:08:23.112657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.112670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.115571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.115621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.115634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.120259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.120300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.120313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.124710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.124751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.893 [2024-05-15 01:08:23.124764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.893 [2024-05-15 01:08:23.127864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.893 [2024-05-15 01:08:23.127903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.127916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.132788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.132828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.132841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.137141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.137180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.137193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.139872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.139909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.139922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.144901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.144941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.144954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.149387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.149427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.149440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.153576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.153641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.153654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.156782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.156835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.156848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.161419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.161458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.161471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.165171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.165211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.165224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.169353] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.169396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.169409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.173487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.173527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.173540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:19.894 [2024-05-15 01:08:23.176862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:19.894 [2024-05-15 01:08:23.176902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:19.894 [2024-05-15 01:08:23.176916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.181446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.181484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.181498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.185851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.185891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.185904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.188634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.188670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.188683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.193670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.193717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.193730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
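[editorial note, not part of the captured console output] Each spdk_nvme_print_completion line is decoding the 16-bit status field of an NVMe completion queue entry: the "(00/22)" pair is status code type 0x0 (generic) with status code 0x22 (Transient Transport Error), and p/m/dnr are the phase tag, more, and do-not-retry bits that the log prints as p:0 m:0 dnr:0. A minimal decoding sketch, assuming a plain uint16_t status word as input (hypothetical helper, not SPDK code):

#include <stdint.h>
#include <stdio.h>

/* Decode the NVMe completion status word into the fields shown in the log:
 * bit 0 = phase tag (P), bits 8:1 = status code (SC),
 * bits 11:9 = status code type (SCT), bit 14 = more (M),
 * bit 15 = do not retry (DNR). */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1u;
    unsigned sc  = (status >> 1) & 0xFFu;
    unsigned sct = (status >> 9) & 0x7u;
    unsigned m   = (status >> 14) & 0x1u;
    unsigned dnr = (status >> 15) & 0x1u;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

/* For the completions in this excerpt the field decodes to SCT 0x0, SC 0x22
 * with dnr:0, i.e. a transient transport error that the host may retry. */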
00:42:20.155 [2024-05-15 01:08:23.197860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.197900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.197913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.201042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.201081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.201095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.206057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.206097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.206110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.209477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.209527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.209541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.213702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.213740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.213753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.218241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.218279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.218293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.222356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.222395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.222408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.225581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.225631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.225644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.230109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.230149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.230161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.234499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.234539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.234553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.237190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.237227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.237240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.242228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.242268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.155 [2024-05-15 01:08:23.242281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.155 [2024-05-15 01:08:23.246411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.155 [2024-05-15 01:08:23.246465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.246479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.249970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.250024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.250037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.253638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.253692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.253705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.256984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.257039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.257052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.260846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.260884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.260898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.264864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.264919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.264932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.268376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.268415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.268428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.272765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.272804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.272817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.277500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.277539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.277553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.281917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.281970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.281984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.284626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.284680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.284693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.289151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.289207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.289220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.292669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.292705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.292718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.296299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.296338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.296351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.300114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.300169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.300182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.304089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.304144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 
[2024-05-15 01:08:23.304157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.308383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.308421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.308434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.312197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.312236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.312249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.316169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.316223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.316237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.320302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.320341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.320354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.324238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.324277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.324290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.327977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.328019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.328032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.331963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.332007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.332020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.337045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.337086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.337099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.156 [2024-05-15 01:08:23.340508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.156 [2024-05-15 01:08:23.340545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.156 [2024-05-15 01:08:23.340558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.344205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.344243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.344256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.348326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.348365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.348378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.353146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.353186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.353199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.356307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.356343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.356356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.360790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.360829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.360842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.364943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.364983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.364996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.367935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.367973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.367986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.372333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.372372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.372385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.377023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.377066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.377080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.381855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.381896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.381910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.384511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.384549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.384562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.389571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.389623] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.389647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.392959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.392998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.393011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.397222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.397260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.397273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.401609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.401645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.401657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.405536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.405573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.405587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.408976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.409015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.409028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.412815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.412854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.412867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.416146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.416186] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.416200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.419856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.419894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.419908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.423995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.424034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.424047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.428248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.428287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.428300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.431624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.431661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.157 [2024-05-15 01:08:23.431675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.157 [2024-05-15 01:08:23.435437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.157 [2024-05-15 01:08:23.435476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.158 [2024-05-15 01:08:23.435489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.158 [2024-05-15 01:08:23.438874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.158 [2024-05-15 01:08:23.438914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.158 [2024-05-15 01:08:23.438927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.442622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 
00:42:20.417 [2024-05-15 01:08:23.442660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.442672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.446446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.446485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.446498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.450722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.450760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.450773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.453947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.453984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.453998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.457891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.457929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.457942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.462182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.462220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.462233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.467218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.467258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.467271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.471336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.471375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.471387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.474584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.474634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.474647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.479096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.479136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.479149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.482409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.482460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.482472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.486852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.486891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.486904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.490634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.490688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.490701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.495146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.495186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.495200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.500012] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.500067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.500081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.503291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.503330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.503343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.508046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.508103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.508117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.513074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.513129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.513142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.517515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.517554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.517568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.520833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.520889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.520902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.524709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.524749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.524762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:42:20.417 [2024-05-15 01:08:23.528715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.528753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.528767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.533097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.533151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.533164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.537210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.537264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.537277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.541255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.541294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.541307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.544794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.544833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.544847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.548249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.548286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.548299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.417 [2024-05-15 01:08:23.552672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.417 [2024-05-15 01:08:23.552709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.417 [2024-05-15 01:08:23.552723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.555989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.556023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.556036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.560068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.560106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.560118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.564335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.564390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.564403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.567527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.567564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.567577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.571659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.571695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.571708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.576484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.576539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.576552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.579812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.579850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.579864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.584097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.584151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.584181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.588727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.588780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.588810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.593027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.593081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.593111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.596289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.596341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.596370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.600790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.600829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.600843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.604747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.604802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.604815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.609086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.609141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.609170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.612456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.612495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.612509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.616135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.616187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.616216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.621102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.621156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.621186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.624713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.624767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.624780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.628906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.628962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.628975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.633902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.633957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.633970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.637270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.637310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 
[2024-05-15 01:08:23.637323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.641196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.641235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.641249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.645172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.645227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.645257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.649050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.649089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.649103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.653410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.653449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.653462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.657682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.657739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.657753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.661057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.661111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.661125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.665628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.665667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.665680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.669505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.669544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.669558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.673198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.673237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.673250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.677290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.677330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.677344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.681739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.681796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.681825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.685177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.685231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.685261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.689372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.689428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.689442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.694269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.694310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.694324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.418 [2024-05-15 01:08:23.698987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.418 [2024-05-15 01:08:23.699028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.418 [2024-05-15 01:08:23.699042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.703735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.703774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.703787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.706791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.706827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.706840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.710642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.710680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.710693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.714840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.714877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.714890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.719282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.719330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.719343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.722714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.722752] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.722766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.726982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.727021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.727034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.731214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.731253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.731266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.734354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.734392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.734405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.738212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.738250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.738263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.741435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.741475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.741487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.745859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.745898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.745911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.750537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.750577] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.750590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.754588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.754664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.754679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.757596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.757674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.757687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.762207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.762263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.762276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.766559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.766621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.766635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.771552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.771591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.771617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.774508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.774559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.774588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.778570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 
00:42:20.679 [2024-05-15 01:08:23.778636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.778665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.783684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.783723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.679 [2024-05-15 01:08:23.783736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.679 [2024-05-15 01:08:23.788597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.679 [2024-05-15 01:08:23.788661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.788674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.791598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.791644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.791658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.795568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.795617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.795631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.799906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.799960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.799973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.802877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.802931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.802971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.807139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.807179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.807192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.812130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.812188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.812218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.815246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.815286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.815299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.819356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.819397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.819411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.823220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.823260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.823273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.827401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.827441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.827454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.831607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.831655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.831669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.835202] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.835242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.835255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.838193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.838248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.838277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.841973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.842011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.842024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.846184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.846224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.846237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.849887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.849926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.849939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.853739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.853792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.853821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.858461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.858500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.858513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:42:20.680 [2024-05-15 01:08:23.861869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.861923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.861952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.865778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.865831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.865860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.869847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.869900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.869928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.873337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.873376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.873390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.876693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.680 [2024-05-15 01:08:23.876746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.680 [2024-05-15 01:08:23.876759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.680 [2024-05-15 01:08:23.880845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.880884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.880897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.885575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.885635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.885649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.888841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.888877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.888889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.892753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.892806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.892834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.896981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.897035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.897064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.901437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.901492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.901520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.904270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.904307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.904321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.909100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.909137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.909166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.912520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.912560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.912574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.916641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.916694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.916723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.920783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.920820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.920833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.924081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.924119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.924132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.927917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.927955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.927968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.931581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.931630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.931644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.935440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.935480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.935493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.939642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.939683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.939697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.943664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.943735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.943749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.947444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.947497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.947525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.951713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.951767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.951796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.955229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.955269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.955282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.681 [2024-05-15 01:08:23.959748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.681 [2024-05-15 01:08:23.959788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.681 [2024-05-15 01:08:23.959801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:23.964074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:23.964113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:23.964126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:23.967370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:23.967408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 
[2024-05-15 01:08:23.967420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:23.971853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:23.971892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:23.971906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:23.976206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:23.976245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:23.976259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:23.979419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:23.979458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:23.979471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:23.983185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:23.983225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:23.983238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:23.986855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:23.986894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:23.986906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:23.991277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:23.991316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:23.991329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:23.994260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:23.994298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12448 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:23.994311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:23.998704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:23.998757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:23.998770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:24.002164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:24.002202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:24.002214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:24.006122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:24.006161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:24.006174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:24.010790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:24.010830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:24.010843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:24.014043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:24.014081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:24.014094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:24.018146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:24.018186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:24.018199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:24.022074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:24.022114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:24.022127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:24.025757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:24.025794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.942 [2024-05-15 01:08:24.025824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.942 [2024-05-15 01:08:24.030009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.942 [2024-05-15 01:08:24.030049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.030078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.035277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.035347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.035361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.038660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.038695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.038723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.042782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.042820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.042849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.048000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.048052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.048080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.052047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.052084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.052112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.055428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.055466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.055479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.059239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.059278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.059291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.063128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.063168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.063181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.067207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.067247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.067259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.071052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.071090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.071103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.074499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.074535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.074564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.078333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 
00:42:20.943 [2024-05-15 01:08:24.078370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.078399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.082539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.082576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.082605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.086062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.086100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.086128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.090759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.090795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.090823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.094172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.094209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.094237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.098048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.098084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.098113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.101643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.101707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.101736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.105567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.105618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.105648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.109450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.109488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.109516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.113253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.113290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.113319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.116849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.116887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.116916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.120944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.120982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.120996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.124648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.124686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.124699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.128497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.128535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.128564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.131237] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.131276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.131289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.135506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.135545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.135558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.139803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.139842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.139855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.143369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.143410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.143423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.147843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.147894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.147908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.152574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.152650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.152665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.156942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.156986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.157000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
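The block of entries above repeats the same pairing: nvme_tcp.c:1450 (nvme_tcp_accel_seq_recv_compute_crc32_done) reports that the CRC-32C data digest (DDGST) recomputed over the data received for a READ did not match the digest carried on the wire, and the affected command is then completed and printed by spdk_nvme_print_completion with TRANSIENT TRANSPORT ERROR (00/22). As a purely illustrative aside, the sketch below shows the kind of CRC-32C (Castagnoli) computation such a digest check compares against; it is not SPDK's implementation, and the payload size, fill pattern, and single-byte corruption are made-up stand-ins for data arriving off the socket.

/* crc32c_sketch.c - minimal, self-contained CRC-32C illustration (not SPDK code).
 * Assumes the reflected polynomial 0x82F63B78 and the usual 0xFFFFFFFF
 * init/final XOR used by CRC-32C, which is the digest NVMe/TCP DDGST uses. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)
            /* Reflected CRC step: shift right, conditionally XOR the polynomial. */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Stand-in for a 32-block (len:32, 512 B blocks) read payload like the
     * READs in the log; a real payload would come from the received PDU. */
    uint8_t payload[32 * 512];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t expected = crc32c(payload, sizeof(payload));

    /* Flip one byte to emulate corruption in flight: the recomputed digest
     * no longer matches, which is the condition logged as a data digest error. */
    payload[100] ^= 0x01;
    uint32_t recomputed = crc32c(payload, sizeof(payload));

    printf("expected DDGST 0x%08x, recomputed 0x%08x -> %s\n",
           (unsigned)expected, (unsigned)recomputed,
           expected == recomputed ? "ok" : "data digest error");
    return 0;
}

Under these assumptions a single flipped payload byte is enough to change the digest, which is consistent with each corrupted transfer in this run surfacing as one *ERROR* entry followed by the command and completion printouts seen above.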
00:42:20.943 [2024-05-15 01:08:24.160761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.160797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.160811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.164099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.164138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.164152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.168883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.168924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.168937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.174041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.174081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.174094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.177164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.177203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.177216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.181262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.181302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.181315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.184748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.184785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.184798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.188211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.188251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.188264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.192869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.192908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.192920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.196281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.196319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.196332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.200115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.200156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.200170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.204208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.204250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.204264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.208361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.208407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.943 [2024-05-15 01:08:24.208422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:20.943 [2024-05-15 01:08:24.211723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.943 [2024-05-15 01:08:24.211769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.944 [2024-05-15 01:08:24.211782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:20.944 [2024-05-15 01:08:24.216167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.944 [2024-05-15 01:08:24.216227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.944 [2024-05-15 01:08:24.216241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:20.944 [2024-05-15 01:08:24.220846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.944 [2024-05-15 01:08:24.220913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.944 [2024-05-15 01:08:24.220927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:20.944 [2024-05-15 01:08:24.225302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:20.944 [2024-05-15 01:08:24.225347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:20.944 [2024-05-15 01:08:24.225360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.228368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.228412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.228425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.232824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.232862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.232876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.236888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.236926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.236939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.240257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.240295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.240308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.244695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.244734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.244747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.249102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.249156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.249169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.252599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.252662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.252676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.256884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.256921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.256934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.261186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.261240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.261253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.265452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.265507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.265520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.268706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.268757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 
[2024-05-15 01:08:24.268770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.272681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.272734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.203 [2024-05-15 01:08:24.272747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.203 [2024-05-15 01:08:24.276799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.203 [2024-05-15 01:08:24.276854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.276867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.280315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.280369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.280382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.284507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.284547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.284560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.288030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.288085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.288099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.292326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.292382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.292395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.296413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.296467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.296481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.300620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.300672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.300685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.303929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.303974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.303987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.308750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.308805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.308834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.312309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.312363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.312392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.316775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.316836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.316849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.320649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.320731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.320760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.324593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.324661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.324691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.328449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.328505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.328534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.332999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.333056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.333086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.337187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.337246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.337275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.341340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.341399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.341430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.346251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.346292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.346306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.349938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.349991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.350036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.354047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.354087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.354100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.358777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.358846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.358859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.364268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.364321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.364349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.367446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.367481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.367493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.371483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.371522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.204 [2024-05-15 01:08:24.371535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.204 [2024-05-15 01:08:24.375448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.204 [2024-05-15 01:08:24.375502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.375544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.379212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.379252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.379265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.383035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 
[2024-05-15 01:08:24.383075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.383088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.386378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.386431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.386460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.389887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.389937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.389966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.393255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.393307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.393335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.396544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.396610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.396625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.400273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.400326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.400354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.404542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.404625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.404639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.407925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.407963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.407975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.411545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.411608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.411622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.415568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.415619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.415633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.419418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.419457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.419470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.423418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.423456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.423468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.427144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.427183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.427196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.431503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.431541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.431555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.436008] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.436062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.436091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.438765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.438816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.438844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.443214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.443253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.443266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.447140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.447178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.447191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.450601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.450663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.450677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.453943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.453995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.454024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.457590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.457654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.457683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:42:21.205 [2024-05-15 01:08:24.461455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.461507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.205 [2024-05-15 01:08:24.461536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.205 [2024-05-15 01:08:24.465170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.205 [2024-05-15 01:08:24.465224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.206 [2024-05-15 01:08:24.465253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.206 [2024-05-15 01:08:24.469300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.206 [2024-05-15 01:08:24.469353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.206 [2024-05-15 01:08:24.469382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.206 [2024-05-15 01:08:24.473423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.206 [2024-05-15 01:08:24.473462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.206 [2024-05-15 01:08:24.473475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.206 [2024-05-15 01:08:24.477359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.206 [2024-05-15 01:08:24.477400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.206 [2024-05-15 01:08:24.477413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.206 [2024-05-15 01:08:24.481379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.206 [2024-05-15 01:08:24.481434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.206 [2024-05-15 01:08:24.481446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.206 [2024-05-15 01:08:24.485687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.206 [2024-05-15 01:08:24.485726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.206 [2024-05-15 01:08:24.485739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.206 [2024-05-15 01:08:24.488890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.206 [2024-05-15 01:08:24.488926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.206 [2024-05-15 01:08:24.488939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.465 [2024-05-15 01:08:24.493251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.465 [2024-05-15 01:08:24.493288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.465 [2024-05-15 01:08:24.493301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.465 [2024-05-15 01:08:24.497753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.465 [2024-05-15 01:08:24.497792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.465 [2024-05-15 01:08:24.497805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.465 [2024-05-15 01:08:24.500654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.465 [2024-05-15 01:08:24.500688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.465 [2024-05-15 01:08:24.500700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.465 [2024-05-15 01:08:24.504969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.465 [2024-05-15 01:08:24.505008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.465 [2024-05-15 01:08:24.505022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.465 [2024-05-15 01:08:24.508978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.465 [2024-05-15 01:08:24.509017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.465 [2024-05-15 01:08:24.509030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.465 [2024-05-15 01:08:24.512229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.512283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.512296] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.516703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.516740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.516754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.521802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.521856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.521869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.525025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.525079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.525093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.528848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.528902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.528915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.532655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.532693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.532706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.536850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.536903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.536916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.540516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.540554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.540567] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.544683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.544735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.544748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.548994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.549033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.549046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.552091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.552146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.552159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.555892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.555929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.555942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.559437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.559474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.559487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.563661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.563700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.563714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.566815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.566852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:42:21.466 [2024-05-15 01:08:24.566865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.570299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.570343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.570357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.574124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.574170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.574183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.578639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.578675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.578688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:21.466 [2024-05-15 01:08:24.582738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa35890) 00:42:21.466 [2024-05-15 01:08:24.582777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:21.466 [2024-05-15 01:08:24.582790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:42:21.466
00:42:21.466 Latency(us)
00:42:21.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:21.466 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:42:21.466 nvme0n1 : 2.00 7785.07 973.13 0.00 0.00 2050.98 647.91 11319.85
00:42:21.466 ===================================================================================================================
00:42:21.466 Total : 7785.07 973.13 0.00 0.00 2050.98 647.91 11319.85
00:42:21.466 0
00:42:21.466 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:42:21.466 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:42:21.466 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:42:21.466 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:42:21.466 | .driver_specific
00:42:21.466 | .nvme_error
00:42:21.466 | .status_code
00:42:21.466 | .command_transient_transport_error'
00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 502 > 0 ))
00:42:21.725 01:08:24
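The (( 502 > 0 )) check above is the assertion for this randread case: the injected crc32c corruption has to surface as COMMAND TRANSIENT TRANSPORT ERROR completions counted in the bdev's NVMe error statistics rather than as failed I/O. The MiB/s column in the summary follows directly from IOPS at the 131072-byte I/O size (7785.07 IOPS x 0.125 MiB per I/O ≈ 973.13 MiB/s). A minimal by-hand version of the same count, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock (in the log it is killed right afterwards):

  # Sketch only: same RPC call and jq filter as traced above, run manually.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # digest.sh only requires the printed count to be non-zero, i.e. (( count > 0 )).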
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 111836 00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 111836 ']' 00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 111836 00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 111836 00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:42:21.725 killing process with pid 111836 00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 111836' 00:42:21.725 Received shutdown signal, test time was about 2.000000 seconds 00:42:21.725 00:42:21.725 Latency(us) 00:42:21.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:21.725 =================================================================================================================== 00:42:21.725 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 111836 00:42:21.725 01:08:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 111836 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=111923 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 111923 /var/tmp/bperf.sock 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 111923 ']' 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:21.983 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:21.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:21.984 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:42:21.984 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:21.984 01:08:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:21.984 [2024-05-15 01:08:25.153374] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:42:21.984 [2024-05-15 01:08:25.153456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111923 ] 00:42:22.242 [2024-05-15 01:08:25.286119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:22.242 [2024-05-15 01:08:25.374059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:23.175 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:23.175 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:42:23.176 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:23.176 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:23.176 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:42:23.176 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.176 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:23.176 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.176 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:23.176 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:23.434 nvme0n1 00:42:23.434 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:42:23.434 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.434 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:23.692 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.692 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:42:23.692 01:08:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:23.692 Running I/O for 2 seconds... 
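Before the randwrite pass below starts producing its own digest errors, the traced helpers configure error accounting, attach the target with TCP data digest enabled, and re-arm the crc32c corruption. Condensed into a by-hand sketch using the binaries and arguments that appear in the trace; treating rpc_cmd as rpc.py against the target application's default RPC socket is an assumption, as are the readings of -z (bdevperf idles until perform_tests) and -i 256 (injection interval):

  # Sketch only; bdevperf was started earlier as:
  #   build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # Keep per-status-code NVMe error counters and retry failed I/O (-1: no limit).
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Injection is switched off while the controller attaches (presumably so the connect completes cleanly).
  "$RPC" accel_error_inject_error -o crc32c -t disable            # rpc_cmd path, target app socket
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0              # exposes nvme0n1

  # Re-arm crc32c corruption so TCP data digests stop matching during the run.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256

  # Kick off the 2-second timed workload configured on the bdevperf command line.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests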
00:42:23.692 [2024-05-15 01:08:26.871817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f6458 00:42:23.692 [2024-05-15 01:08:26.873427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:23.692 [2024-05-15 01:08:26.873554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:23.692 [2024-05-15 01:08:26.882966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f20d8 00:42:23.692 [2024-05-15 01:08:26.884293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:23.692 [2024-05-15 01:08:26.884367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:42:23.692 [2024-05-15 01:08:26.895972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e8d30 00:42:23.692 [2024-05-15 01:08:26.898241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:23.692 [2024-05-15 01:08:26.898306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:42:23.692 [2024-05-15 01:08:26.909334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ef270 00:42:23.692 [2024-05-15 01:08:26.910395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:23.692 [2024-05-15 01:08:26.910477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:23.692 [2024-05-15 01:08:26.919965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e7818 00:42:23.692 [2024-05-15 01:08:26.920621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:23.692 [2024-05-15 01:08:26.920701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:42:23.692 [2024-05-15 01:08:26.934539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f5be8 00:42:23.692 [2024-05-15 01:08:26.935945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:23.692 [2024-05-15 01:08:26.936014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:42:23.692 [2024-05-15 01:08:26.944438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eb328 00:42:23.692 [2024-05-15 01:08:26.945417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:23.692 [2024-05-15 01:08:26.945480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 
m:0 dnr:0 00:42:23.692 [2024-05-15 01:08:26.957566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e3060 00:42:23.692 [2024-05-15 01:08:26.958869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:23.692 [2024-05-15 01:08:26.958947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:42:23.692 [2024-05-15 01:08:26.969420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ed4e8 00:42:23.693 [2024-05-15 01:08:26.970512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:23.693 [2024-05-15 01:08:26.970561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:42:24.003 [2024-05-15 01:08:26.980757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f9f68 00:42:24.003 [2024-05-15 01:08:26.981600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.003 [2024-05-15 01:08:26.981653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:42:24.003 [2024-05-15 01:08:26.994084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f92c0 00:42:24.003 [2024-05-15 01:08:26.995695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.003 [2024-05-15 01:08:26.995737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:42:24.003 [2024-05-15 01:08:27.005324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eb760 00:42:24.003 [2024-05-15 01:08:27.006713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.003 [2024-05-15 01:08:27.006752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:42:24.003 [2024-05-15 01:08:27.016668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e99d8 00:42:24.003 [2024-05-15 01:08:27.018196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.003 [2024-05-15 01:08:27.018235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:24.003 [2024-05-15 01:08:27.028474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e7818 00:42:24.003 [2024-05-15 01:08:27.029758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.003 [2024-05-15 01:08:27.029796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:003d p:0 m:0 dnr:0 00:42:24.003 [2024-05-15 01:08:27.042899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e3d08 00:42:24.003 [2024-05-15 01:08:27.044839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.003 [2024-05-15 01:08:27.044878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:42:24.003 [2024-05-15 01:08:27.051488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e88f8 00:42:24.003 [2024-05-15 01:08:27.052480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.003 [2024-05-15 01:08:27.052518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:42:24.003 [2024-05-15 01:08:27.063464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f8a50 00:42:24.003 [2024-05-15 01:08:27.064463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.003 [2024-05-15 01:08:27.064502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:42:24.003 [2024-05-15 01:08:27.076933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fcdd0 00:42:24.004 [2024-05-15 01:08:27.078462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.078500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.088159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f0788 00:42:24.004 [2024-05-15 01:08:27.089687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.089726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.099969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f31b8 00:42:24.004 [2024-05-15 01:08:27.101143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.101182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.112153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e7c50 00:42:24.004 [2024-05-15 01:08:27.112904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.112942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.123533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ddc00 00:42:24.004 [2024-05-15 01:08:27.124183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.124217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.135665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fbcf0 00:42:24.004 [2024-05-15 01:08:27.136722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.136761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.147683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fb480 00:42:24.004 [2024-05-15 01:08:27.148267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.148305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.162442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f7538 00:42:24.004 [2024-05-15 01:08:27.164381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.164420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.170877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e99d8 00:42:24.004 [2024-05-15 01:08:27.171670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.171709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.186070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e5a90 00:42:24.004 [2024-05-15 01:08:27.187844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.187888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.194731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e5658 00:42:24.004 [2024-05-15 01:08:27.195654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.195694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.208658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ea248 00:42:24.004 [2024-05-15 01:08:27.209914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.209955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.220832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e5220 00:42:24.004 [2024-05-15 01:08:27.222214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.222254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.232515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eb328 00:42:24.004 [2024-05-15 01:08:27.233842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.233892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.244683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f6cc8 00:42:24.004 [2024-05-15 01:08:27.245952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.245988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.256270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fa3a0 00:42:24.004 [2024-05-15 01:08:27.257688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.257737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.268368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fc998 00:42:24.004 [2024-05-15 01:08:27.269517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.269553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:42:24.004 [2024-05-15 01:08:27.280572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e49b0 00:42:24.004 [2024-05-15 01:08:27.281333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.004 [2024-05-15 01:08:27.281372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.292482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f5378 00:42:24.264 [2024-05-15 01:08:27.293657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.293694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.307224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fb480 00:42:24.264 [2024-05-15 01:08:27.309044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.309080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.315892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fa3a0 00:42:24.264 [2024-05-15 01:08:27.316788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.316827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.330471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e7c50 00:42:24.264 [2024-05-15 01:08:27.332058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.332107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.341686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f7da8 00:42:24.264 [2024-05-15 01:08:27.343179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.343218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.353474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f2948 00:42:24.264 [2024-05-15 01:08:27.354781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.354818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.368141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ec840 00:42:24.264 [2024-05-15 01:08:27.370100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.370138] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.376547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190df118 00:42:24.264 [2024-05-15 01:08:27.377352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.377391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.391475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f0ff8 00:42:24.264 [2024-05-15 01:08:27.393269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.393319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.400318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e38d0 00:42:24.264 [2024-05-15 01:08:27.401262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.401297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.414960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f7538 00:42:24.264 [2024-05-15 01:08:27.416623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.416691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.425778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190de8a8 00:42:24.264 [2024-05-15 01:08:27.427082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.427124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.439013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f0ff8 00:42:24.264 [2024-05-15 01:08:27.440084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.440145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.450827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e0630 00:42:24.264 [2024-05-15 01:08:27.451910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.451957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.462218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ddc00 00:42:24.264 [2024-05-15 01:08:27.462962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.463024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.472951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fe720 00:42:24.264 [2024-05-15 01:08:27.473857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.473914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:42:24.264 [2024-05-15 01:08:27.486276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f3e60 00:42:24.264 [2024-05-15 01:08:27.486995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.264 [2024-05-15 01:08:27.487033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:42:24.265 [2024-05-15 01:08:27.501285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f6458 00:42:24.265 [2024-05-15 01:08:27.503157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.265 [2024-05-15 01:08:27.503201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:42:24.265 [2024-05-15 01:08:27.514066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eb328 00:42:24.265 [2024-05-15 01:08:27.515844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.265 [2024-05-15 01:08:27.515884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:24.265 [2024-05-15 01:08:27.522937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eea00 00:42:24.265 [2024-05-15 01:08:27.523945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.265 [2024-05-15 01:08:27.523983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:24.265 [2024-05-15 01:08:27.538223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f20d8 00:42:24.265 [2024-05-15 01:08:27.539910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.265 [2024-05-15 
01:08:27.539949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:42:24.265 [2024-05-15 01:08:27.547078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e0ea0 00:42:24.265 [2024-05-15 01:08:27.547815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.265 [2024-05-15 01:08:27.547851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.561889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e6b70 00:42:24.524 [2024-05-15 01:08:27.563119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.524 [2024-05-15 01:08:27.563158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.573828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f3a28 00:42:24.524 [2024-05-15 01:08:27.574862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.524 [2024-05-15 01:08:27.574900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.585524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190de038 00:42:24.524 [2024-05-15 01:08:27.586464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.524 [2024-05-15 01:08:27.586512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.600516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e1b48 00:42:24.524 [2024-05-15 01:08:27.602190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.524 [2024-05-15 01:08:27.602228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.609121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f9b30 00:42:24.524 [2024-05-15 01:08:27.609879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.524 [2024-05-15 01:08:27.609916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.621259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f6890 00:42:24.524 [2024-05-15 01:08:27.622049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:42:24.524 [2024-05-15 01:08:27.622087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.635214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f2d80 00:42:24.524 [2024-05-15 01:08:27.636425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.524 [2024-05-15 01:08:27.636464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.647426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f8618 00:42:24.524 [2024-05-15 01:08:27.648985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.524 [2024-05-15 01:08:27.649025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.659390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f8a50 00:42:24.524 [2024-05-15 01:08:27.660943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.524 [2024-05-15 01:08:27.660982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.668168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ef270 00:42:24.524 [2024-05-15 01:08:27.668952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.524 [2024-05-15 01:08:27.668985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.682811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f7970 00:42:24.524 [2024-05-15 01:08:27.684262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.524 [2024-05-15 01:08:27.684301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:42:24.524 [2024-05-15 01:08:27.694855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f6458 00:42:24.524 [2024-05-15 01:08:27.695897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.525 [2024-05-15 01:08:27.695950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:42:24.525 [2024-05-15 01:08:27.706236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190de038 00:42:24.525 [2024-05-15 01:08:27.707084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16970 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:42:24.525 [2024-05-15 01:08:27.707123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:42:24.525 [2024-05-15 01:08:27.717422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e1710 00:42:24.525 [2024-05-15 01:08:27.718249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.525 [2024-05-15 01:08:27.718288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:42:24.525 [2024-05-15 01:08:27.730583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fcdd0 00:42:24.525 [2024-05-15 01:08:27.732044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.525 [2024-05-15 01:08:27.732084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:42:24.525 [2024-05-15 01:08:27.741254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e0a68 00:42:24.525 [2024-05-15 01:08:27.742712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.525 [2024-05-15 01:08:27.742749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:42:24.525 [2024-05-15 01:08:27.752977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f5be8 00:42:24.525 [2024-05-15 01:08:27.753986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.525 [2024-05-15 01:08:27.754024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:42:24.525 [2024-05-15 01:08:27.764198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eb760 00:42:24.525 [2024-05-15 01:08:27.765015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.525 [2024-05-15 01:08:27.765053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:42:24.525 [2024-05-15 01:08:27.779173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f81e0 00:42:24.525 [2024-05-15 01:08:27.780985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.525 [2024-05-15 01:08:27.781041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:42:24.525 [2024-05-15 01:08:27.790519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e4de8 00:42:24.525 [2024-05-15 01:08:27.792226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19741 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:42:24.525 [2024-05-15 01:08:27.792265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:42:24.525 [2024-05-15 01:08:27.799393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e0630 00:42:24.525 [2024-05-15 01:08:27.800258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.525 [2024-05-15 01:08:27.800295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.814058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e1b48 00:42:24.785 [2024-05-15 01:08:27.815390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.815445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.825254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f7538 00:42:24.785 [2024-05-15 01:08:27.826405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.826442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.837545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190efae0 00:42:24.785 [2024-05-15 01:08:27.839016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.839057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.849475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f20d8 00:42:24.785 [2024-05-15 01:08:27.850581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.850651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.860780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e49b0 00:42:24.785 [2024-05-15 01:08:27.861677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.861715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.872271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f57b0 00:42:24.785 [2024-05-15 01:08:27.872991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:1049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.873024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.883956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ed920 00:42:24.785 [2024-05-15 01:08:27.884590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.884639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.897548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f9b30 00:42:24.785 [2024-05-15 01:08:27.898945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.898995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.908437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f3e60 00:42:24.785 [2024-05-15 01:08:27.909887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.909925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.920159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f7da8 00:42:24.785 [2024-05-15 01:08:27.921337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.921377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.934692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ebb98 00:42:24.785 [2024-05-15 01:08:27.936532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.936582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.943218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f1ca0 00:42:24.785 [2024-05-15 01:08:27.944127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.785 [2024-05-15 01:08:27.944166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:42:24.785 [2024-05-15 01:08:27.958001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fe2e8 00:42:24.786 [2024-05-15 01:08:27.959588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:105 nsid:1 lba:18350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.786 [2024-05-15 01:08:27.959643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:42:24.786 [2024-05-15 01:08:27.969274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e5a90 00:42:24.786 [2024-05-15 01:08:27.970783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.786 [2024-05-15 01:08:27.970824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:42:24.786 [2024-05-15 01:08:27.980914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eb760 00:42:24.786 [2024-05-15 01:08:27.982168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.786 [2024-05-15 01:08:27.982210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:42:24.786 [2024-05-15 01:08:27.995461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fd208 00:42:24.786 [2024-05-15 01:08:27.997387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.786 [2024-05-15 01:08:27.997425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:42:24.786 [2024-05-15 01:08:28.004184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f4f40 00:42:24.786 [2024-05-15 01:08:28.005163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.786 [2024-05-15 01:08:28.005199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:42:24.786 [2024-05-15 01:08:28.018782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f0788 00:42:24.786 [2024-05-15 01:08:28.020468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.786 [2024-05-15 01:08:28.020515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:24.786 [2024-05-15 01:08:28.030393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f92c0 00:42:24.786 [2024-05-15 01:08:28.032111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.786 [2024-05-15 01:08:28.032168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:24.786 [2024-05-15 01:08:28.042488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e0a68 00:42:24.786 [2024-05-15 01:08:28.043881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:22059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.786 [2024-05-15 01:08:28.043920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:42:24.786 [2024-05-15 01:08:28.053787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190df988 00:42:24.786 [2024-05-15 01:08:28.055366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.786 [2024-05-15 01:08:28.055423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:42:24.786 [2024-05-15 01:08:28.065733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ef270 00:42:24.786 [2024-05-15 01:08:28.066850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:24.786 [2024-05-15 01:08:28.066888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.078406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fc128 00:42:25.046 [2024-05-15 01:08:28.079737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.079775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.093316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fa7d8 00:42:25.046 [2024-05-15 01:08:28.095230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.095268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.101929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e73e0 00:42:25.046 [2024-05-15 01:08:28.102879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.102918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.113936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fe2e8 00:42:25.046 [2024-05-15 01:08:28.114884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.114924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.127832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f1868 00:42:25.046 [2024-05-15 01:08:28.129413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.129452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.137251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eea00 00:42:25.046 [2024-05-15 01:08:28.138176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.138213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.151638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ecc78 00:42:25.046 [2024-05-15 01:08:28.153215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.153262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.163017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190feb58 00:42:25.046 [2024-05-15 01:08:28.164250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.164291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.175401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e4de8 00:42:25.046 [2024-05-15 01:08:28.176756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.176794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.188244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190efae0 00:42:25.046 [2024-05-15 01:08:28.189574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.189648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.201205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f4298 00:42:25.046 [2024-05-15 01:08:28.202633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.202671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.213487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f7da8 00:42:25.046 [2024-05-15 
01:08:28.214394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.214434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.225779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e38d0 00:42:25.046 [2024-05-15 01:08:28.227137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.227179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.240707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ea680 00:42:25.046 [2024-05-15 01:08:28.242716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.242755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.249303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f0788 00:42:25.046 [2024-05-15 01:08:28.250376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.250415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.264043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ed0b0 00:42:25.046 [2024-05-15 01:08:28.265765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.046 [2024-05-15 01:08:28.265813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:42:25.046 [2024-05-15 01:08:28.272672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f8a50 00:42:25.046 [2024-05-15 01:08:28.273440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.047 [2024-05-15 01:08:28.273477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:42:25.047 [2024-05-15 01:08:28.288176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e95a0 00:42:25.047 [2024-05-15 01:08:28.289884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.047 [2024-05-15 01:08:28.289921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:42:25.047 [2024-05-15 01:08:28.296902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f7538 
00:42:25.047 [2024-05-15 01:08:28.297657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.047 [2024-05-15 01:08:28.297693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:42:25.047 [2024-05-15 01:08:28.311724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f4f40 00:42:25.047 [2024-05-15 01:08:28.312952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.047 [2024-05-15 01:08:28.313001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:42:25.047 [2024-05-15 01:08:28.324454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ee5c8 00:42:25.047 [2024-05-15 01:08:28.326109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.047 [2024-05-15 01:08:28.326146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:42:25.304 [2024-05-15 01:08:28.334198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ed920 00:42:25.304 [2024-05-15 01:08:28.335151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.304 [2024-05-15 01:08:28.335191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:42:25.304 [2024-05-15 01:08:28.349230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eb328 00:42:25.304 [2024-05-15 01:08:28.350815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.304 [2024-05-15 01:08:28.350852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:42:25.304 [2024-05-15 01:08:28.360512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190feb58 00:42:25.304 [2024-05-15 01:08:28.362507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.304 [2024-05-15 01:08:28.362545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:42:25.304 [2024-05-15 01:08:28.374024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eb760 00:42:25.304 [2024-05-15 01:08:28.375070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.304 [2024-05-15 01:08:28.375109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:42:25.304 [2024-05-15 01:08:28.385775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with 
pdu=0x2000190fb480 00:42:25.304 [2024-05-15 01:08:28.386559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.304 [2024-05-15 01:08:28.386608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:42:25.304 [2024-05-15 01:08:28.397472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ec408 00:42:25.305 [2024-05-15 01:08:28.398197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.398236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.412833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ed4e8 00:42:25.305 [2024-05-15 01:08:28.414801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.414838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.421758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fac10 00:42:25.305 [2024-05-15 01:08:28.422717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.422773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.436422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190df550 00:42:25.305 [2024-05-15 01:08:28.437992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.438029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.448234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eee38 00:42:25.305 [2024-05-15 01:08:28.449749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.449804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.460333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fac10 00:42:25.305 [2024-05-15 01:08:28.461640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.461684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.475256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa29ce0) with pdu=0x2000190e5ec8 00:42:25.305 [2024-05-15 01:08:28.477217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.477268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.484058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f81e0 00:42:25.305 [2024-05-15 01:08:28.485094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.485139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.499676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f2948 00:42:25.305 [2024-05-15 01:08:28.501709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.501760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.508678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e3498 00:42:25.305 [2024-05-15 01:08:28.509749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.509784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.522914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190de470 00:42:25.305 [2024-05-15 01:08:28.524355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.524396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.532467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e38d0 00:42:25.305 [2024-05-15 01:08:28.533194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.533233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.544665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e8d30 00:42:25.305 [2024-05-15 01:08:28.545404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.545440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.559589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa29ce0) with pdu=0x2000190df118 00:42:25.305 [2024-05-15 01:08:28.561026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.561081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.571167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fe720 00:42:25.305 [2024-05-15 01:08:28.572375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.572411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:42:25.305 [2024-05-15 01:08:28.582552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e8d30 00:42:25.305 [2024-05-15 01:08:28.583656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.305 [2024-05-15 01:08:28.583695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:42:25.563 [2024-05-15 01:08:28.594315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ea248 00:42:25.563 [2024-05-15 01:08:28.595230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.563 [2024-05-15 01:08:28.595275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:42:25.563 [2024-05-15 01:08:28.606404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f4f40 00:42:25.563 [2024-05-15 01:08:28.607152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.563 [2024-05-15 01:08:28.607205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:42:25.563 [2024-05-15 01:08:28.618919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190eaab8 00:42:25.563 [2024-05-15 01:08:28.620013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.563 [2024-05-15 01:08:28.620065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.633887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e88f8 00:42:25.564 [2024-05-15 01:08:28.635609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.635658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.646048] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e3060 00:42:25.564 [2024-05-15 01:08:28.647831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.647870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.654866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f3e60 00:42:25.564 [2024-05-15 01:08:28.655830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.655869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.666901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190ddc00 00:42:25.564 [2024-05-15 01:08:28.667856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.667895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.678487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e73e0 00:42:25.564 [2024-05-15 01:08:28.679364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.679414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.692948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e5658 00:42:25.564 [2024-05-15 01:08:28.693952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.693990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.704378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e9e10 00:42:25.564 [2024-05-15 01:08:28.705244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.705282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.715613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e49b0 00:42:25.564 [2024-05-15 01:08:28.716298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.716335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 
01:08:28.729144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fd208 00:42:25.564 [2024-05-15 01:08:28.730641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.730677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.739674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f6458 00:42:25.564 [2024-05-15 01:08:28.740916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.740955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.752000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f2948 00:42:25.564 [2024-05-15 01:08:28.753602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.753656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.763827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fb8b8 00:42:25.564 [2024-05-15 01:08:28.764988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.765055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.775140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f8a50 00:42:25.564 [2024-05-15 01:08:28.776124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.776172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.787160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190e4578 00:42:25.564 [2024-05-15 01:08:28.788274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.788312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:25.564 [2024-05-15 01:08:28.799460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f31b8 00:42:25.564 [2024-05-15 01:08:28.800671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:25.564 [2024-05-15 01:08:28.800714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:42:25.564 [2024-05-15 01:08:28.812157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190fa7d8
00:42:25.564 [2024-05-15 01:08:28.813218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:42:25.564 [2024-05-15 01:08:28.813277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:42:25.564 [2024-05-15 01:08:28.826844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f8e88
00:42:25.564 [2024-05-15 01:08:28.828815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:42:25.564 [2024-05-15 01:08:28.828871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:42:25.564 [2024-05-15 01:08:28.835519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f35f0
00:42:25.564 [2024-05-15 01:08:28.836540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:42:25.564 [2024-05-15 01:08:28.836603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:42:25.564 [2024-05-15 01:08:28.850244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa29ce0) with pdu=0x2000190f0788
00:42:25.823 [2024-05-15 01:08:28.851927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:42:25.823 [2024-05-15 01:08:28.851967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:42:25.823
00:42:25.823 Latency(us)
00:42:25.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:25.823 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:25.823 nvme0n1 : 2.01 20949.34 81.83 0.00 0.00 6100.58 2561.86 19541.64
00:42:25.823 ===================================================================================================================
00:42:25.823 Total : 20949.34 81.83 0.00 0.00 6100.58 2561.86 19541.64
00:42:25.823 0
00:42:25.823 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:42:25.823 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:42:25.823 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:42:25.823 | .driver_specific
00:42:25.823 | .nvme_error
00:42:25.823 | .status_code
00:42:25.823 | .command_transient_transport_error'
00:42:25.823 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 ))
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 111923
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 111923 ']'
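The xtrace lines above are the pass/fail check for this run: host/digest.sh reads the per-bdev NVMe error counters back over the bperf RPC socket and asserts that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted (164 here). A minimal sketch of what that check amounts to, reconstructed only from the commands visible in the trace (the real get_transient_errcount helper in host/digest.sh may differ in detail):

    # Sketch reconstructed from the trace above; not the host/digest.sh source.
    get_transient_errcount() {
        # Ask the bdevperf instance for per-bdev I/O statistics, then pull out
        # the counter that the injected data-digest failures increment.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # the test only passes if digest errors were actually counted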
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 111923
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 111923
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:42:26.083 killing process with pid 111923
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 111923'
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 111923
00:42:26.083 Received shutdown signal, test time was about 2.000000 seconds
00:42:26.083
00:42:26.083 Latency(us)
00:42:26.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:26.083 ===================================================================================================================
00:42:26.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:42:26.083 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 111923
00:42:26.342 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:42:26.342 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:42:26.342 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:42:26.342 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:42:26.342 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:42:26.342 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112008
00:42:26.343 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112008 /var/tmp/bperf.sock
00:42:26.343 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:42:26.343 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 112008 ']'
00:42:26.343 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:42:26.343 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:42:26.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:42:26.343 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:42:26.343 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable
00:42:26.343 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:26.343 I/O size of 131072 is greater than zero copy threshold (65536).
00:42:26.343 Zero copy mechanism will not be used.
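For the 128 KiB/QD16 case the script starts a fresh bdevperf instance in the background and then blocks in waitforlisten until the bperf RPC socket is up before issuing any RPCs. The launch-and-wait pattern, sketched only as an illustration (waitforlisten in common/autotest_common.sh does considerably more, including retry limits and error reporting):

    # The bdevperf command line is the one shown in the trace; the wait loop is
    # a simplified stand-in for waitforlisten, not its actual implementation.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    for ((i = 0; i < 100; i++)); do             # max_retries=100, as in the trace
        [[ -S /var/tmp/bperf.sock ]] && break   # stop once the UNIX socket exists
        sleep 0.1
    done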
00:42:26.343 [2024-05-15 01:08:29.551241] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization...
00:42:26.343 [2024-05-15 01:08:29.551370] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112008 ]
00:42:26.601 [2024-05-15 01:08:29.685059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:26.601 [2024-05-15 01:08:29.804011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:42:27.552 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:42:27.553 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
00:42:27.553 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:42:27.553 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:42:27.553 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:42:27.553 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:27.553 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:27.553 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:27.553 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:42:27.553 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:42:28.121 nvme0n1
00:42:28.121 01:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:42:28.121 01:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:42:28.121 01:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:28.121 01:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:42:28.121 01:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:42:28.121 01:08:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:42:28.121 I/O size of 131072 is greater than zero copy threshold (65536).
00:42:28.121 Zero copy mechanism will not be used.
00:42:28.121 Running I/O for 2 seconds...
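With the new reactor running, the trace above repeats the digest-error recipe from the 4 KiB run: enable per-status-code NVMe error statistics, reset the CRC32C error injection, attach the target with TCP data digest enabled, arm the corruption, and start the timed workload. Condensed into the raw RPC calls that appear in the trace (a sketch, not the script source; rpc_cmd's socket is not shown in this excerpt, so those calls are left on rpc.py's default):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # bperf_rpc targets the bdevperf instance: keep NVMe error statistics and
    # retry failed I/O indefinitely (--bdev-retry-count -1).
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any previous CRC32C error injection (issued via rpc_cmd in the script).
    $RPC accel_error_inject_error -o crc32c -t disable

    # Attach the target with TCP data digest (--ddgst) enabled; this creates nvme0n1.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm CRC32C corruption so data digests mismatch (-o, -t, -i taken verbatim from the trace).
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the queued randwrite workload inside bdevperf for the 2-second run.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests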
00:42:28.121 [2024-05-15 01:08:31.284475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.284801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.284861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.290418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.290725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.290766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.296182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.296470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.296505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.302257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.302545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.302581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.308064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.308353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.308388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.313816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.314105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.314139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.319666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.319955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.319988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.325543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.325853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.325888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.331383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.331686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.331721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.337199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.337510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.337545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.343118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.343456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.343490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.349135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.349439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.349472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.355116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.355406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.355439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.361147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.361434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.361468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.366982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.367270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.367306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.372709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.121 [2024-05-15 01:08:31.372999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.121 [2024-05-15 01:08:31.373033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.121 [2024-05-15 01:08:31.378505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.122 [2024-05-15 01:08:31.378807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.122 [2024-05-15 01:08:31.378841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.122 [2024-05-15 01:08:31.384287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.122 [2024-05-15 01:08:31.384588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.122 [2024-05-15 01:08:31.384633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.122 [2024-05-15 01:08:31.390131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.122 [2024-05-15 01:08:31.390420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.122 [2024-05-15 01:08:31.390453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.122 [2024-05-15 01:08:31.396111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.122 [2024-05-15 01:08:31.396416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.122 [2024-05-15 01:08:31.396450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.122 [2024-05-15 01:08:31.402049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.122 [2024-05-15 01:08:31.402360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.122 [2024-05-15 01:08:31.402393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.408137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.408427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 [2024-05-15 01:08:31.408461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.413805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.414088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 [2024-05-15 01:08:31.414122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.419422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.419716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 [2024-05-15 01:08:31.419749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.425187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.425482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 [2024-05-15 01:08:31.425516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.430892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.431198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 [2024-05-15 01:08:31.431232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.436703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.436992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 [2024-05-15 01:08:31.437040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.442510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.442837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 
[2024-05-15 01:08:31.442888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.448714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.449087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 [2024-05-15 01:08:31.449119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.454789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.455088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 [2024-05-15 01:08:31.455121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.460666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.460941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 [2024-05-15 01:08:31.460974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.466675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.466960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.382 [2024-05-15 01:08:31.466993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.382 [2024-05-15 01:08:31.472505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.382 [2024-05-15 01:08:31.472810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.472844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.478099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.478387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.478422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.484061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.484328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.484360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.489620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.489892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.489926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.495378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.495676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.495711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.501053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.501341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.501375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.506831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.507118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.507152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.512476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.512777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.512811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.518224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.518502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.518536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.523932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.524237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.524271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.529790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.530065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.530098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.535374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.535677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.535704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.541111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.541385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.541411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.546875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.547174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.547208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.552613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.552900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.552934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.558289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.558559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.558592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.563864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.564166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.564200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.569577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.569860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.569893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.575212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.575487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.575520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.580790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.581071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.581105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.586243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.586523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.586557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.591871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.592178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.592213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.597454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.597743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.597779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.603005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 
[2024-05-15 01:08:31.603283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.603319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.383 [2024-05-15 01:08:31.608661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.383 [2024-05-15 01:08:31.608936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.383 [2024-05-15 01:08:31.608970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.384 [2024-05-15 01:08:31.614635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.384 [2024-05-15 01:08:31.614935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.384 [2024-05-15 01:08:31.614997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.384 [2024-05-15 01:08:31.620426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.384 [2024-05-15 01:08:31.620732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.384 [2024-05-15 01:08:31.620766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.384 [2024-05-15 01:08:31.626198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.384 [2024-05-15 01:08:31.626487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.384 [2024-05-15 01:08:31.626521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.384 [2024-05-15 01:08:31.631863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.384 [2024-05-15 01:08:31.632147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.384 [2024-05-15 01:08:31.632180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.384 [2024-05-15 01:08:31.637537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.384 [2024-05-15 01:08:31.637841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.384 [2024-05-15 01:08:31.637874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.384 [2024-05-15 01:08:31.643275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with 
pdu=0x2000190fef90 00:42:28.384 [2024-05-15 01:08:31.643550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.384 [2024-05-15 01:08:31.643584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.384 [2024-05-15 01:08:31.648833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.384 [2024-05-15 01:08:31.649107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.384 [2024-05-15 01:08:31.649140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.384 [2024-05-15 01:08:31.654464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.384 [2024-05-15 01:08:31.654762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.384 [2024-05-15 01:08:31.654796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.384 [2024-05-15 01:08:31.660525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.384 [2024-05-15 01:08:31.660839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.384 [2024-05-15 01:08:31.660873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.384 [2024-05-15 01:08:31.666268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.384 [2024-05-15 01:08:31.666570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.384 [2024-05-15 01:08:31.666614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.672584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.672902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.672936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.678873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.679187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.679221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.684850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.685175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.685209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.690772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.691070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.691103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.696837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.697133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.697166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.702705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.703019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.703052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.708666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.708954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.709014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.714567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.714881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.714916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.720369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.720674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.720707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.726034] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.726318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.726352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.731852] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.732154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.732213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.737936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.738241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.738275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.743879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.744173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.744207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.749890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.750166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.644 [2024-05-15 01:08:31.750202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.644 [2024-05-15 01:08:31.755769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.644 [2024-05-15 01:08:31.756046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.756086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.761419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.761724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.761758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:42:28.645 [2024-05-15 01:08:31.767327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.767619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.767666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.773173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.773422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.773454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.778412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.778686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.778722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.783881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.784143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.784177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.789406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.789709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.789752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.795159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.795447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.795483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.800388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.800666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.800699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.805492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.805797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.805830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.810835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.811152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.811188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.816252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.816452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.816499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.821634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.821858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.821885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.827266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.827495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.827520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.833040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.833275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.833320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.838863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.839092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.839126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.844258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.844511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.844546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.849924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.850156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.850189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.855637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.855838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.855865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.861039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.861250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.861284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.866664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.866881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.866914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.872076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.872273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.872299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.877489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.877710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.877735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.882715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.882923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.882964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.645 [2024-05-15 01:08:31.888122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.645 [2024-05-15 01:08:31.888345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.645 [2024-05-15 01:08:31.888379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.646 [2024-05-15 01:08:31.893540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.646 [2024-05-15 01:08:31.893772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.646 [2024-05-15 01:08:31.893804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.646 [2024-05-15 01:08:31.898859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.646 [2024-05-15 01:08:31.899085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.646 [2024-05-15 01:08:31.899112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.646 [2024-05-15 01:08:31.904383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.646 [2024-05-15 01:08:31.904620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.646 [2024-05-15 01:08:31.904677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.646 [2024-05-15 01:08:31.909779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.646 [2024-05-15 01:08:31.909979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.646 [2024-05-15 01:08:31.910012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.646 [2024-05-15 01:08:31.915008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.646 [2024-05-15 01:08:31.915209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.646 [2024-05-15 
01:08:31.915244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.646 [2024-05-15 01:08:31.920544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.646 [2024-05-15 01:08:31.920753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.646 [2024-05-15 01:08:31.920786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.646 [2024-05-15 01:08:31.926198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.646 [2024-05-15 01:08:31.926395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.646 [2024-05-15 01:08:31.926421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.905 [2024-05-15 01:08:31.931645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.905 [2024-05-15 01:08:31.931899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.905 [2024-05-15 01:08:31.931925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.937079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.937281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.937308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.942677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.942907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.942939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.948128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.948347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.948396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.953582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.953795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.953821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.959008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.959207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.959232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.964735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.964952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.964979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.970288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.970503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.970529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.975512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.975731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.975766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.980844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.981079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.981112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.986207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.986440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.986474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.991662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.991880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.991914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:31.996926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:31.997140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:31.997174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.002646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.002869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.002902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.008178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.008375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.008433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.013414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.013606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.013632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.018674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.018875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.018902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.023923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.024123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.024149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.029154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.029369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.029402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.034469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.034661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.034687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.040200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.040387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.040412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.045811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.046060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.046092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.051380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.051553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.051593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.057174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.057350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.906 [2024-05-15 01:08:32.057374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.906 [2024-05-15 01:08:32.062444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.906 [2024-05-15 01:08:32.062640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.062668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.067958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.068117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.068143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.073183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.073358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.073384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.078345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.078528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.078555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.083976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.084151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.084178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.089394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.089630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.089671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.095320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.095473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.095499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.100759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.100966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.100991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.106168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 
01:08:32.106338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.106363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.111555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.111753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.111794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.117025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.117236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.117269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.122575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.122760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.122788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.127829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.127987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.128013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.133132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.133326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.133352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.138744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.138931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.138969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.144294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with 
pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.144444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.144470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.149520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.149722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.149748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.154871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.155036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.155062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.160166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.160379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.160404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.165334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.165524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.165550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.170542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.170727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.170754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.175779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.175968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.175994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.181360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.181525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.181551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.907 [2024-05-15 01:08:32.187014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:28.907 [2024-05-15 01:08:32.187169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.907 [2024-05-15 01:08:32.187194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.167 [2024-05-15 01:08:32.192915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.167 [2024-05-15 01:08:32.193115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.167 [2024-05-15 01:08:32.193157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.167 [2024-05-15 01:08:32.198982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.167 [2024-05-15 01:08:32.199179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.167 [2024-05-15 01:08:32.199212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.167 [2024-05-15 01:08:32.205115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.167 [2024-05-15 01:08:32.205293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.167 [2024-05-15 01:08:32.205335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.167 [2024-05-15 01:08:32.211033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.167 [2024-05-15 01:08:32.211187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.167 [2024-05-15 01:08:32.211230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.167 [2024-05-15 01:08:32.216745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.167 [2024-05-15 01:08:32.216898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.167 [2024-05-15 01:08:32.216924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.167 [2024-05-15 01:08:32.222779] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.167 [2024-05-15 01:08:32.222948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.167 [2024-05-15 01:08:32.222999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.167 [2024-05-15 01:08:32.228653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.167 [2024-05-15 01:08:32.228848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.167 [2024-05-15 01:08:32.228874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.167 [2024-05-15 01:08:32.234302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.167 [2024-05-15 01:08:32.234470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.167 [2024-05-15 01:08:32.234495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.240008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.240186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.240213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.246032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.246205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.246231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.251723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.251879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.251906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.257476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.257677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.257705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:42:29.168 [2024-05-15 01:08:32.263164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.263338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.263373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.269188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.269404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.269439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.275200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.275430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.275455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.281221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.281391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.281418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.286825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.287020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.287046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.292835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.292995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.293021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.298895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.299090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.299115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.304881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.305051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.305077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.310904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.311106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.311132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.316576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.316746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.316772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.322409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.322613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.322640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.328261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.328416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.328443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.334070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.334227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.334255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.339356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.339533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.339559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.344500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.344672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.344698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.349658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.349813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.349839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.354835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.355010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.355037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.360031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.360222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.360249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.365207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.365397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.365423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.370316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.370470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.370496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.375762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.375928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.375954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.380971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.381124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.381150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.168 [2024-05-15 01:08:32.386085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.168 [2024-05-15 01:08:32.386269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.168 [2024-05-15 01:08:32.386296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.391628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.391817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.391844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.396859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.397030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.397056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.402214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.402385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.402423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.407473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.407660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.407686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.412956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.413127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 
01:08:32.413154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.418207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.418397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.418423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.423572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.423763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.423798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.428860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.429029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.429056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.434081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.434253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.434279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.439381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.439569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.439626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.444512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.444691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.444718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.169 [2024-05-15 01:08:32.449692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.169 [2024-05-15 01:08:32.449847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:42:29.169 [2024-05-15 01:08:32.449873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.454819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.455032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.455058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.459919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.460013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.460037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.465071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.465189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.465215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.470331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.470476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.470501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.475488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.475719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.475745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.480669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.480804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.480828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.485827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.485911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.485937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.490948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.491053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.491078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.496048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.496174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.496199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.501147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.501289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.501314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.506340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.506457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.506482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.511433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.511530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.511556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.516561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.516687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.516713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.521809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.521934] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.521959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.527012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.527116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.527142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.532324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.532524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.532547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.537513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.537752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.537789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.542762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.542897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.542921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.547863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.430 [2024-05-15 01:08:32.547971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.430 [2024-05-15 01:08:32.548013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.430 [2024-05-15 01:08:32.553100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.553204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.553228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.558277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.558420] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.558446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.563514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.563685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.563711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.568741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.568964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.568988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.574013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.574145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.574171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.579202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.579318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.579343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.584446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.584598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.584624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.589621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.589734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.589759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.594734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 
00:42:29.431 [2024-05-15 01:08:32.594855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.594881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.599907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.600054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.600079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.604995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.605086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.605113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.610142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.610265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.610288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.615257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.615382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.615408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.620508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.620628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.620685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.625789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.625915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.625940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.630938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.631058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.631084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.636195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.636315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.636339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.641493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.641650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.641676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.646718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.647036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.431 [2024-05-15 01:08:32.647062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.431 [2024-05-15 01:08:32.651969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.431 [2024-05-15 01:08:32.652165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.652198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.657029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.657155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.657181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.662281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.662403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.662430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.667433] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.667661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.667708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.672707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.672848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.672873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.677773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.677866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.677891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.683069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.683182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.683208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.688295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.688407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.688431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.693492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.693608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.693648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.698585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.698705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.698748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.703838] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.703927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.703952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.709018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.709093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.709116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.432 [2024-05-15 01:08:32.714098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.432 [2024-05-15 01:08:32.714280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.432 [2024-05-15 01:08:32.714314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.692 [2024-05-15 01:08:32.719275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.692 [2024-05-15 01:08:32.719412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.719436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.724514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.724607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.724634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.729736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.729819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.729845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.734869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.734997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.735021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.693 
[2024-05-15 01:08:32.740067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.740180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.740205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.745333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.745423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.745448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.750655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.750760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.750786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.755858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.755952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.755976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.761087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.761218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.761242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.766316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.766474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.766514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.771500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.771633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.771670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:42:29.693 [2024-05-15 01:08:32.776673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.776756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.776781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.781830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.781928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.781953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.787177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.787323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.787347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.792441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.792515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.792540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.797681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.797754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.797779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.802881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.802994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.803024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.808102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.808218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.808244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.813333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.813438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.813463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.818711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.818818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.818843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.823932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.824083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.824108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.829108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.829259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.829283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.834535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.834648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.834688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.839892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.840010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.693 [2024-05-15 01:08:32.840036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.693 [2024-05-15 01:08:32.845330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.693 [2024-05-15 01:08:32.845425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.845450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.850670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.850790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.850816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.856146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.856230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.856255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.861684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.861813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.861837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.867231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.867355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.867379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.872651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.872803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.872828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.878146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.878252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.878278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.883559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.883696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.883720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.889048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.889120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.889145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.894385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.894512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.894535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.899816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.899936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.899961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.905329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.905446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.905470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.910534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.910683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.910709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.915932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.916097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.916121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.921302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.921459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.921484] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.926645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.926772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.926797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.932132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.932270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.932295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.937431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.937542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.937567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.942664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.942770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.942795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.947939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.948061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.948086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.953200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.953330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.953356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.958479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.958627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.958677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.963764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.963880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.963906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.969082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.969216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.969240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.694 [2024-05-15 01:08:32.974224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.694 [2024-05-15 01:08:32.974360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.694 [2024-05-15 01:08:32.974402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:32.979406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:32.979550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:32.979574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:32.984780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:32.984905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:32.984931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:32.990369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:32.990486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:32.990511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:32.995606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:32.995735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 
01:08:32.995760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.000948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.001071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.001096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.006613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.006751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.006778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.011936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.012063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.012088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.017538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.017696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.017722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.023024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.023152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.023179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.028438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.028609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.028636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.033754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.033863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:42:29.954 [2024-05-15 01:08:33.033889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.039010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.039121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.039146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.044187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.044296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.044322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.049490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.049637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.049664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.054906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.055051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.055077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.060240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.060383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.060424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.954 [2024-05-15 01:08:33.065585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.954 [2024-05-15 01:08:33.065744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.954 [2024-05-15 01:08:33.065770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.070861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.071043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.071071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.076186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.076293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.076318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.081477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.081599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.081662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.086583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.086744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.086769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.091827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.091945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.091971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.096927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.097051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.097077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.102188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.102320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.102347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.107391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.107525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.107551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.112425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.112569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.112611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.117745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.117856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.117882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.123110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.123253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.123279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.128183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.128309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.128334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.133267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.133377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.133403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.138433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.138544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.138570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.143724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.143849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.143876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.148976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.149092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.149116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.154050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.154196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.154222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.159395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.159515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.159540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.164801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.164911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.164937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.169983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.170092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.170118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.175153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.175264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.175305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.955 [2024-05-15 01:08:33.180346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.955 [2024-05-15 01:08:33.180452] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.955 [2024-05-15 01:08:33.180478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.185508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.185649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.185676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.190710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.190820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.190845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.196024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.196140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.196167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.201249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.201386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.201410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.206391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.206527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.206552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.211617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.211724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.211751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.216874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.217008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.217034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.222124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.222276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.222303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.227290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.227424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.227449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.232541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.232720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.232747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:29.956 [2024-05-15 01:08:33.237801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:29.956 [2024-05-15 01:08:33.237913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:29.956 [2024-05-15 01:08:33.237938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:30.215 [2024-05-15 01:08:33.243036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:30.215 [2024-05-15 01:08:33.243176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.215 [2024-05-15 01:08:33.243203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:30.215 [2024-05-15 01:08:33.248329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:30.215 [2024-05-15 01:08:33.248477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.215 [2024-05-15 01:08:33.248503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:30.215 [2024-05-15 01:08:33.253580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:30.215 [2024-05-15 
01:08:33.253729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.215 [2024-05-15 01:08:33.253756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:30.215 [2024-05-15 01:08:33.258656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:30.215 [2024-05-15 01:08:33.258764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.215 [2024-05-15 01:08:33.258790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:30.215 [2024-05-15 01:08:33.263920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:30.215 [2024-05-15 01:08:33.264060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.215 [2024-05-15 01:08:33.264086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:30.215 [2024-05-15 01:08:33.269320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:30.215 [2024-05-15 01:08:33.269431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.215 [2024-05-15 01:08:33.269455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:30.215 [2024-05-15 01:08:33.274570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbebd20) with pdu=0x2000190fef90 00:42:30.215 [2024-05-15 01:08:33.274707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.215 [2024-05-15 01:08:33.274732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:30.215 00:42:30.215 Latency(us) 00:42:30.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:30.215 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:42:30.215 nvme0n1 : 2.00 5656.42 707.05 0.00 0.00 2820.65 2263.97 7923.90 00:42:30.215 =================================================================================================================== 00:42:30.215 Total : 5656.42 707.05 0.00 0.00 2820.65 2263.97 7923.90 00:42:30.215 0 00:42:30.215 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:42:30.215 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:42:30.215 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:42:30.215 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:42:30.215 | .driver_specific 00:42:30.215 | .nvme_error 00:42:30.215 | .status_code 00:42:30.215 | .command_transient_transport_error' 
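A condensed sketch of the transient-error check being traced here (the rpc.py path, bperf socket, bdev name, and jq filter are taken from the trace itself): digest.sh asks the bdevperf RPC server for per-bdev iostat and pulls the NVMe command_transient_transport_error counter out of the returned JSON, then requires it to be non-zero.

    # Sketch only, not part of the captured trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # In this run the counter is 365, so the (( count > 0 )) check traced below passes.
    (( errcount > 0 ))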
00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 365 > 0 )) 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112008 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 112008 ']' 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 112008 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 112008 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:42:30.474 killing process with pid 112008 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 112008' 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 112008 00:42:30.474 Received shutdown signal, test time was about 2.000000 seconds 00:42:30.474 00:42:30.474 Latency(us) 00:42:30.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:30.474 =================================================================================================================== 00:42:30.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:30.474 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 112008 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 111701 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 111701 ']' 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 111701 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 111701 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:42:30.732 killing process with pid 111701 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 111701' 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 111701 00:42:30.732 [2024-05-15 01:08:33.916028] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:42:30.732 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 111701 00:42:30.991 00:42:30.991 real 0m18.798s 00:42:30.991 user 0m35.428s 00:42:30.991 sys 0m5.278s 00:42:30.991 01:08:34 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:30.991 01:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:30.991 ************************************ 00:42:30.991 END TEST nvmf_digest_error 00:42:30.991 ************************************ 00:42:30.991 01:08:34 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:42:30.991 01:08:34 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:42:30.991 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:30.991 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:42:30.991 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:30.991 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:42:30.991 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:30.991 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:30.991 rmmod nvme_tcp 00:42:30.991 rmmod nvme_fabrics 00:42:30.991 rmmod nvme_keyring 00:42:30.991 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 111701 ']' 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 111701 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 111701 ']' 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@951 -- # kill -0 111701 00:42:31.267 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (111701) - No such process 00:42:31.267 Process with pid 111701 is not found 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 111701 is not found' 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:42:31.267 00:42:31.267 real 0m37.642s 00:42:31.267 user 1m10.128s 00:42:31.267 sys 0m10.163s 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:31.267 01:08:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:42:31.267 ************************************ 00:42:31.267 END TEST nvmf_digest 00:42:31.267 ************************************ 00:42:31.267 01:08:34 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:42:31.267 01:08:34 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:42:31.267 01:08:34 nvmf_tcp -- nvmf/nvmf.sh@112 -- # run_test 
nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:42:31.267 01:08:34 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:42:31.267 01:08:34 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:42:31.267 01:08:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:31.267 ************************************ 00:42:31.267 START TEST nvmf_mdns_discovery 00:42:31.267 ************************************ 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:42:31.267 * Looking for test storage... 00:42:31.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:31.267 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:42:31.268 01:08:34 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 
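The nvmf_veth_init steps that follow build the virtual test topology: nvmf_init_if (10.0.0.1) stays in the root namespace as the initiator side, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace for the target, and the veth peers are joined by the nvmf_br bridge with TCP port 4420 opened in iptables. Condensed from the trace below, the setup is roughly:

    # Sketch condensed from the nvmf_veth_init trace below; assumes iproute2 and root privileges.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # plus an "ip link set ... up" for each interface and the bridge, as traced below
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT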
00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:42:31.268 Cannot find device "nvmf_tgt_br" 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:42:31.268 Cannot find device "nvmf_tgt_br2" 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:42:31.268 Cannot find device "nvmf_tgt_br" 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:42:31.268 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:42:31.527 Cannot find device "nvmf_tgt_br2" 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:31.527 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:31.527 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:42:31.527 01:08:34 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:31.527 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:42:31.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:31.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:42:31.786 00:42:31.786 --- 10.0.0.2 ping statistics --- 00:42:31.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:31.786 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:42:31.786 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:31.786 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:42:31.786 00:42:31.786 --- 10.0.0.3 ping statistics --- 00:42:31.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:31.786 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:31.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:31.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:42:31.786 00:42:31.786 --- 10.0.0.1 ping statistics --- 00:42:31.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:31.786 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=112308 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 112308 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@828 -- # '[' -z 112308 ']' 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:31.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:31.786 01:08:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:31.786 [2024-05-15 01:08:34.919812] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:42:31.786 [2024-05-15 01:08:34.919917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:31.786 [2024-05-15 01:08:35.061442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:32.044 [2024-05-15 01:08:35.175546] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
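The target is started inside the namespace with --wait-for-rpc so the test can set the discovery filter before the framework initializes; the rpc_cmd calls traced after the startup banner are roughly equivalent to the following sketch (assuming the default RPC socket /var/tmp/spdk.sock shown above):

    # Sketch of the discovery-target configuration performed via rpc_cmd below.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_set_config --discovery-filter=address
    "$rpc" framework_start_init
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009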
00:42:32.044 [2024-05-15 01:08:35.175630] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:32.044 [2024-05-15 01:08:35.175646] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:32.044 [2024-05-15 01:08:35.175656] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:32.044 [2024-05-15 01:08:35.175665] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:32.044 [2024-05-15 01:08:35.175696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:32.611 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:32.611 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@861 -- # return 0 00:42:32.611 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:32.611 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:42:32.611 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:32.870 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:32.870 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:42:32.870 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.870 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:32.870 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.870 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:42:32.870 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.870 01:08:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:32.870 [2024-05-15 01:08:36.070713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:32.870 [2024-05-15 01:08:36.078605] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:42:32.870 [2024-05-15 01:08:36.078930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 
0 == 0 ]] 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:32.870 null0 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:32.870 null1 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:32.870 null2 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:32.870 null3 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=112358 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 112358 /tmp/host.sock 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@828 -- # '[' -z 112358 ']' 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:42:32.870 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:32.870 01:08:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:33.128 [2024-05-15 01:08:36.181327] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
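The RPC calls issued between mdns_discovery.sh@31 and @49 above configure the target and then start a second SPDK app that plays the host role on its own socket. Roughly, with rpc.py standing in for the test's rpc_cmd wrapper (the scripts/rpc.py path and the tgt_rpc name are assumptions; sockets and values are as shown in the log):

  rootdir=/home/vagrant/spdk_repo/spdk
  tgt_rpc() { "$rootdir/scripts/rpc.py" "$@"; }          # talks to /var/tmp/spdk.sock by default

  tgt_rpc nvmf_set_config --discovery-filter=address     # only the address decides discovery visibility
  tgt_rpc framework_start_init                           # release the --wait-for-rpc pause
  tgt_rpc nvmf_create_transport -t tcp -o -u 8192
  tgt_rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

  for b in null0 null1 null2 null3; do                   # 1000 MB null bdevs, 512-byte blocks
      tgt_rpc bdev_null_create "$b" 1000 512
  done
  tgt_rpc bdev_wait_for_examine

  # Second app acts as the NVMe-oF host for the discovery test, on /tmp/host.sock
  "$rootdir/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
  hostpid=$!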
00:42:33.128 [2024-05-15 01:08:36.181438] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112358 ] 00:42:33.128 [2024-05-15 01:08:36.317330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:33.386 [2024-05-15 01:08:36.417007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:33.953 01:08:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:33.953 01:08:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@861 -- # return 0 00:42:33.953 01:08:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:42:33.953 01:08:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:42:33.953 01:08:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:42:34.210 01:08:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=112387 00:42:34.210 01:08:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:42:34.210 01:08:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:42:34.210 01:08:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:42:34.210 Process 1003 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:42:34.210 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:42:34.210 Successfully dropped root privileges. 00:42:34.210 avahi-daemon 0.8 starting up. 00:42:34.210 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:42:34.210 Successfully called chroot(). 00:42:34.210 Successfully dropped remaining capabilities. 00:42:34.210 No service file found in /etc/avahi/services. 00:42:35.145 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:42:35.145 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:42:35.145 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:42:35.145 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:42:35.145 Network interface enumeration completed. 00:42:35.145 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:42:35.145 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:42:35.145 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:42:35.145 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:42:35.145 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 1220649150. 
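The avahi-daemon instance that just came up is fed a throwaway config through process substitution (the /dev/fd/63 seen above). Written out long-hand, with a temporary file standing in for the substitution, the step is:

  # Illustrative equivalent of the echo -e | avahi-daemon -f /dev/fd/63 construct above
  avahi-daemon --kill 2>/dev/null || true      # stop any already-running instance first

  echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' \
      > /tmp/avahi-nvmf.conf                   # hypothetical temp path

  # Run avahi inside the target namespace so it only announces on the test interfaces
  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-nvmf.conf &
  avahipid=$!
  sleep 1                                      # matches mdns_discovery.sh@59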
00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:42:35.145 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # sort 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.404 [2024-05-15 01:08:38.579876] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.404 [2024-05-15 01:08:38.615690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.404 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.404 [2024-05-15 01:08:38.655625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:42:35.405 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.405 01:08:38 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:42:35.405 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.405 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.405 [2024-05-15 01:08:38.663542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:35.405 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.405 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:42:35.405 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:35.405 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:35.405 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:35.405 01:08:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:42:36.339 [2024-05-15 01:08:39.479889] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:42:36.905 [2024-05-15 01:08:40.079942] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:42:36.906 [2024-05-15 01:08:40.079989] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:42:36.906 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:36.906 cookie is 0 00:42:36.906 is_local: 1 00:42:36.906 our_own: 0 00:42:36.906 wide_area: 0 00:42:36.906 multicast: 1 00:42:36.906 cached: 1 00:42:36.906 [2024-05-15 01:08:40.179909] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:42:36.906 [2024-05-15 01:08:40.179958] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:42:36.906 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:36.906 cookie is 0 00:42:36.906 is_local: 1 00:42:36.906 our_own: 0 00:42:36.906 wide_area: 0 00:42:36.906 multicast: 1 00:42:36.906 cached: 1 00:42:36.906 [2024-05-15 01:08:40.179974] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:42:37.164 [2024-05-15 01:08:40.279904] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:42:37.164 [2024-05-15 01:08:40.279949] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:42:37.164 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:37.164 cookie is 0 00:42:37.164 is_local: 1 00:42:37.164 our_own: 0 00:42:37.164 wide_area: 0 00:42:37.164 multicast: 1 00:42:37.164 cached: 1 00:42:37.164 [2024-05-15 01:08:40.379902] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:42:37.164 [2024-05-15 01:08:40.379970] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:42:37.164 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:37.164 cookie is 0 00:42:37.164 is_local: 1 00:42:37.164 our_own: 0 00:42:37.164 wide_area: 0 00:42:37.164 multicast: 1 00:42:37.164 cached: 1 00:42:37.164 [2024-05-15 01:08:40.379984] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:42:38.101 [2024-05-15 01:08:41.084296] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:42:38.101 [2024-05-15 01:08:41.084342] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:42:38.101 [2024-05-15 01:08:41.084377] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:42:38.101 [2024-05-15 01:08:41.170433] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:42:38.101 [2024-05-15 01:08:41.226566] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:42:38.101 [2024-05-15 01:08:41.226614] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:42:38.101 [2024-05-15 01:08:41.283867] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:42:38.101 [2024-05-15 01:08:41.283900] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:42:38.101 [2024-05-15 01:08:41.283917] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:38.101 [2024-05-15 01:08:41.369991] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:42:38.359 [2024-05-15 01:08:41.425439] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:42:38.359 [2024-05-15 01:08:41.425487] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 
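Summarizing mdns_discovery.sh@61 through @125 above: the host app starts mDNS-driven discovery, the target publishes one subsystem per bridge address plus the mDNS pull-registration records, and avahi's spdk0/spdk1 service resolution lets the host attach mdns0_nvme0 and mdns1_nvme0. A condensed sketch (rpc.py calls in place of rpc_cmd; the host_rpc/tgt_rpc wrapper names are illustrative, the values are copied from the log):

  rootdir=/home/vagrant/spdk_repo/spdk
  host_rpc() { "$rootdir/scripts/rpc.py" -s /tmp/host.sock "$@"; }
  tgt_rpc()  { "$rootdir/scripts/rpc.py" "$@"; }

  # Host side: browse _nvme-disc._tcp and auto-attach whatever is advertised
  host_rpc log_set_flag bdev_nvme
  host_rpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

  # Target side: one subsystem per address, both restricted to the host NQN above
  tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  tgt_rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

  tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
  tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
  tgt_rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
  tgt_rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

  tgt_rpc nvmf_publish_mdns_prr                # advertise the discovery service over mDNS
  sleep 5                                      # give avahi and the host time to resolve and attach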
00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:42:40.893 01:08:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:40.893 01:08:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:42:41.830 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:42:41.830 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:41.830 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:41.830 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:41.830 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:41.830 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:41.830 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:42.089 [2024-05-15 01:08:45.210512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:42.089 [2024-05-15 01:08:45.211629] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:42:42.089 [2024-05-15 01:08:45.211668] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:42:42.089 [2024-05-15 01:08:45.211705] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:42:42.089 [2024-05-15 01:08:45.211720] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:42.089 [2024-05-15 01:08:45.218413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:42:42.089 [2024-05-15 01:08:45.218611] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:42:42.089 [2024-05-15 01:08:45.218671] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.089 01:08:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:42:42.089 [2024-05-15 01:08:45.351758] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:42:42.089 [2024-05-15 01:08:45.352019] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:42:42.348 [2024-05-15 01:08:45.409104] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:42:42.348 [2024-05-15 01:08:45.409148] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:42:42.348 [2024-05-15 01:08:45.409156] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:42:42.348 [2024-05-15 01:08:45.409177] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:42:42.348 [2024-05-15 01:08:45.409223] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:42:42.348 [2024-05-15 01:08:45.409233] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:42:42.348 [2024-05-15 01:08:45.409239] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:42:42.348 [2024-05-15 01:08:45.409253] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:42.348 [2024-05-15 01:08:45.454844] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:42:42.348 [2024-05-15 01:08:45.454880] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:42:42.348 [2024-05-15 01:08:45.454925] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:42:42.348 [2024-05-15 01:08:45.454934] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:43.290 [2024-05-15 01:08:46.527402] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:42:43.290 [2024-05-15 01:08:46.527455] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:42:43.290 [2024-05-15 01:08:46.527520] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:42:43.290 [2024-05-15 01:08:46.527542] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:43.290 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:43.290 [2024-05-15 01:08:46.534733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:42:43.290 [2024-05-15 01:08:46.534788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:43.290 [2024-05-15 01:08:46.534808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:42:43.290 [2024-05-15 01:08:46.534824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:43.290 [2024-05-15 01:08:46.534840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:42:43.290 [2024-05-15 01:08:46.534856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:43.290 [2024-05-15 01:08:46.534872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:42:43.290 [2024-05-15 01:08:46.534887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:43.290 [2024-05-15 01:08:46.534901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.290 [2024-05-15 01:08:46.535457] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:42:43.291 [2024-05-15 01:08:46.535571] 
bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:42:43.291 [2024-05-15 01:08:46.537711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:42:43.291 [2024-05-15 01:08:46.537754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:43.291 [2024-05-15 01:08:46.537782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:42:43.291 [2024-05-15 01:08:46.537798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:43.291 [2024-05-15 01:08:46.537814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:42:43.291 [2024-05-15 01:08:46.537829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:43.291 [2024-05-15 01:08:46.537846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:42:43.291 [2024-05-15 01:08:46.537861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:43.291 [2024-05-15 01:08:46.537875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.291 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:43.291 01:08:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:42:43.291 [2024-05-15 01:08:46.544667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.291 [2024-05-15 01:08:46.547666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.291 [2024-05-15 01:08:46.554700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.291 [2024-05-15 01:08:46.554870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.291 [2024-05-15 01:08:46.554967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.291 [2024-05-15 01:08:46.555000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.291 [2024-05-15 01:08:46.555022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.291 [2024-05-15 01:08:46.555051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.291 [2024-05-15 01:08:46.555074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.291 [2024-05-15 01:08:46.555087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.291 [2024-05-15 01:08:46.555103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:42:43.291 [2024-05-15 01:08:46.555127] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.291 [2024-05-15 01:08:46.557686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.291 [2024-05-15 01:08:46.557855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.291 [2024-05-15 01:08:46.557942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.291 [2024-05-15 01:08:46.557971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.291 [2024-05-15 01:08:46.557996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.291 [2024-05-15 01:08:46.558029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.291 [2024-05-15 01:08:46.558057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.291 [2024-05-15 01:08:46.558074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.291 [2024-05-15 01:08:46.558093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.291 [2024-05-15 01:08:46.558121] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.291 [2024-05-15 01:08:46.564798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.291 [2024-05-15 01:08:46.564961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.291 [2024-05-15 01:08:46.565042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.291 [2024-05-15 01:08:46.565079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.291 [2024-05-15 01:08:46.565100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.291 [2024-05-15 01:08:46.565137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.291 [2024-05-15 01:08:46.565166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.291 [2024-05-15 01:08:46.565185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.291 [2024-05-15 01:08:46.565207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.291 [2024-05-15 01:08:46.565235] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
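The rpc_cmd | jq | sort | xargs pipelines running since mdns_discovery.sh@65 are the test's verification helpers. Reconstructed from the pipelines visible in this log (the host_rpc wrapper name is illustrative; the literal expected values mirror the [[ ... ]] checks at @127-@155 above):

  rootdir=/home/vagrant/spdk_repo/spdk
  host_rpc() { "$rootdir/scripts/rpc.py" -s /tmp/host.sock "$@"; }
  notify_id=0                                  # advanced by the test after each count

  get_subsystem_names()     { host_rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
  get_bdev_list()           { host_rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
  get_mdns_discovery_svcs() { host_rpc bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs; }
  get_discovery_ctrlrs()    { host_rpc bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs; }
  get_subsystem_paths()     { host_rpc bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }
  get_notification_count()  { host_rpc notify_get_notifications -i "$notify_id" | jq '. | length'; }

  # State checked at @151-@155 above, once the 4421 listeners had been added:
  [[ $(get_subsystem_names) == "mdns0_nvme0 mdns1_nvme0" ]]
  [[ $(get_subsystem_paths mdns0_nvme0) == "4420 4421" ]]
  [[ $(get_subsystem_paths mdns1_nvme0) == "4420 4421" ]]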
00:42:43.291 [2024-05-15 01:08:46.567789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.291 [2024-05-15 01:08:46.567909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.291 [2024-05-15 01:08:46.567975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.291 [2024-05-15 01:08:46.567998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.291 [2024-05-15 01:08:46.568014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.291 [2024-05-15 01:08:46.568037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.291 [2024-05-15 01:08:46.568056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.291 [2024-05-15 01:08:46.568070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.291 [2024-05-15 01:08:46.568083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.291 [2024-05-15 01:08:46.568104] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.291 [2024-05-15 01:08:46.574895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.291 [2024-05-15 01:08:46.575029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.291 [2024-05-15 01:08:46.575092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.291 [2024-05-15 01:08:46.575114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.291 [2024-05-15 01:08:46.575130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.291 [2024-05-15 01:08:46.575152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.291 [2024-05-15 01:08:46.575171] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.291 [2024-05-15 01:08:46.575185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.291 [2024-05-15 01:08:46.575199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.291 [2024-05-15 01:08:46.575218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:43.552 [2024-05-15 01:08:46.577864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.552 [2024-05-15 01:08:46.577998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.578060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.578083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.552 [2024-05-15 01:08:46.578098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.552 [2024-05-15 01:08:46.578120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.552 [2024-05-15 01:08:46.578140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.552 [2024-05-15 01:08:46.578153] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.552 [2024-05-15 01:08:46.578166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.552 [2024-05-15 01:08:46.578186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.552 [2024-05-15 01:08:46.584979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.552 [2024-05-15 01:08:46.585099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.585163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.585186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.552 [2024-05-15 01:08:46.585201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.552 [2024-05-15 01:08:46.585224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.552 [2024-05-15 01:08:46.585243] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.552 [2024-05-15 01:08:46.585257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.552 [2024-05-15 01:08:46.585270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.552 [2024-05-15 01:08:46.585291] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:43.552 [2024-05-15 01:08:46.587955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.552 [2024-05-15 01:08:46.588063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.588125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.588146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.552 [2024-05-15 01:08:46.588161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.552 [2024-05-15 01:08:46.588184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.552 [2024-05-15 01:08:46.588204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.552 [2024-05-15 01:08:46.588216] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.552 [2024-05-15 01:08:46.588230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.552 [2024-05-15 01:08:46.588249] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.552 [2024-05-15 01:08:46.595055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.552 [2024-05-15 01:08:46.595168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.595230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.595252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.552 [2024-05-15 01:08:46.595267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.552 [2024-05-15 01:08:46.595288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.552 [2024-05-15 01:08:46.595308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.552 [2024-05-15 01:08:46.595321] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.552 [2024-05-15 01:08:46.595334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.552 [2024-05-15 01:08:46.595355] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:43.552 [2024-05-15 01:08:46.598023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.552 [2024-05-15 01:08:46.598129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.598190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.598212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.552 [2024-05-15 01:08:46.598228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.552 [2024-05-15 01:08:46.598250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.552 [2024-05-15 01:08:46.598269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.552 [2024-05-15 01:08:46.598282] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.552 [2024-05-15 01:08:46.598296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.552 [2024-05-15 01:08:46.598315] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.552 [2024-05-15 01:08:46.605160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.552 [2024-05-15 01:08:46.605380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.605503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.552 [2024-05-15 01:08:46.605545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.552 [2024-05-15 01:08:46.605570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.552 [2024-05-15 01:08:46.605631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.552 [2024-05-15 01:08:46.605707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.552 [2024-05-15 01:08:46.605730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.552 [2024-05-15 01:08:46.605752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.553 [2024-05-15 01:08:46.605788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:43.553 [2024-05-15 01:08:46.608109] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.553 [2024-05-15 01:08:46.608275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.608386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.608428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.553 [2024-05-15 01:08:46.608453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.553 [2024-05-15 01:08:46.608494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.553 [2024-05-15 01:08:46.608526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.553 [2024-05-15 01:08:46.608546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.553 [2024-05-15 01:08:46.608568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.553 [2024-05-15 01:08:46.608625] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.553 [2024-05-15 01:08:46.615284] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.553 [2024-05-15 01:08:46.615406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.615454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.615471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.553 [2024-05-15 01:08:46.615483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.553 [2024-05-15 01:08:46.615501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.553 [2024-05-15 01:08:46.615516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.553 [2024-05-15 01:08:46.615525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.553 [2024-05-15 01:08:46.615535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.553 [2024-05-15 01:08:46.615549] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:43.553 [2024-05-15 01:08:46.618197] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.553 [2024-05-15 01:08:46.618279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.618327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.618344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.553 [2024-05-15 01:08:46.618354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.553 [2024-05-15 01:08:46.618371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.553 [2024-05-15 01:08:46.618386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.553 [2024-05-15 01:08:46.618395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.553 [2024-05-15 01:08:46.618405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.553 [2024-05-15 01:08:46.618420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.553 [2024-05-15 01:08:46.625364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.553 [2024-05-15 01:08:46.625488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.625538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.625555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.553 [2024-05-15 01:08:46.625567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.553 [2024-05-15 01:08:46.625585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.553 [2024-05-15 01:08:46.625612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.553 [2024-05-15 01:08:46.625623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.553 [2024-05-15 01:08:46.625634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.553 [2024-05-15 01:08:46.625650] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:43.553 [2024-05-15 01:08:46.628249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.553 [2024-05-15 01:08:46.628332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.628379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.628395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.553 [2024-05-15 01:08:46.628406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.553 [2024-05-15 01:08:46.628423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.553 [2024-05-15 01:08:46.628438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.553 [2024-05-15 01:08:46.628447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.553 [2024-05-15 01:08:46.628456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.553 [2024-05-15 01:08:46.628470] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.553 [2024-05-15 01:08:46.635442] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.553 [2024-05-15 01:08:46.635538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.635585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.635617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.553 [2024-05-15 01:08:46.635630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.553 [2024-05-15 01:08:46.635648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.553 [2024-05-15 01:08:46.635663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.553 [2024-05-15 01:08:46.635673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.553 [2024-05-15 01:08:46.635683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.553 [2024-05-15 01:08:46.635698] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:43.553 [2024-05-15 01:08:46.638300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.553 [2024-05-15 01:08:46.638380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.638430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.638446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.553 [2024-05-15 01:08:46.638457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.553 [2024-05-15 01:08:46.638474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.553 [2024-05-15 01:08:46.638488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.553 [2024-05-15 01:08:46.638498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.553 [2024-05-15 01:08:46.638507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.553 [2024-05-15 01:08:46.638521] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.553 [2024-05-15 01:08:46.645505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.553 [2024-05-15 01:08:46.645621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.645670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.645687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.553 [2024-05-15 01:08:46.645699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.553 [2024-05-15 01:08:46.645717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.553 [2024-05-15 01:08:46.645751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.553 [2024-05-15 01:08:46.645762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.553 [2024-05-15 01:08:46.645772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.553 [2024-05-15 01:08:46.645788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:43.553 [2024-05-15 01:08:46.648353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.553 [2024-05-15 01:08:46.648443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.648490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.553 [2024-05-15 01:08:46.648506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.554 [2024-05-15 01:08:46.648518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.554 [2024-05-15 01:08:46.648536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.554 [2024-05-15 01:08:46.648550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.554 [2024-05-15 01:08:46.648559] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.554 [2024-05-15 01:08:46.648569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.554 [2024-05-15 01:08:46.648584] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.554 [2024-05-15 01:08:46.655581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.554 [2024-05-15 01:08:46.655765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.554 [2024-05-15 01:08:46.655818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.554 [2024-05-15 01:08:46.655835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.554 [2024-05-15 01:08:46.655848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.554 [2024-05-15 01:08:46.655869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.554 [2024-05-15 01:08:46.655916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.554 [2024-05-15 01:08:46.655928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.554 [2024-05-15 01:08:46.655941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.554 [2024-05-15 01:08:46.655957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
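Right after this last round of refused connections, the discovery log page processing below reports the cnode20 (10.0.0.3) and cnode0 (10.0.0.2) paths on port 4420 as "not found" and the port 4421 paths as "found again". A sketch of how that state could be inspected by hand at this point is shown below; it reuses only RPC methods and the /tmp/host.sock application socket that already appear in this trace, and it assumes SPDK's rpc.py client (which the rpc_cmd helper in these scripts wraps) is invoked from the repository root:

    # Illustrative only: list the controllers the host-side bdev_nvme layer still tracks,
    # and the mdns discovery services driving them.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'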
00:42:43.554 [2024-05-15 01:08:46.658409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:42:43.554 [2024-05-15 01:08:46.658495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.554 [2024-05-15 01:08:46.658541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.554 [2024-05-15 01:08:46.658558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.3, port=4420 00:42:43.554 [2024-05-15 01:08:46.658569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabfc60 is same with the state(5) to be set 00:42:43.554 [2024-05-15 01:08:46.658587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabfc60 (9): Bad file descriptor 00:42:43.554 [2024-05-15 01:08:46.658614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:42:43.554 [2024-05-15 01:08:46.658625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:42:43.554 [2024-05-15 01:08:46.658635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:42:43.554 [2024-05-15 01:08:46.658650] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.554 [2024-05-15 01:08:46.664934] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:42:43.554 [2024-05-15 01:08:46.664984] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:42:43.554 [2024-05-15 01:08:46.665024] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:42:43.554 [2024-05-15 01:08:46.665687] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:43.554 [2024-05-15 01:08:46.665806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.554 [2024-05-15 01:08:46.665857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:43.554 [2024-05-15 01:08:46.665874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac4af0 with addr=10.0.0.2, port=4420 00:42:43.554 [2024-05-15 01:08:46.665886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac4af0 is same with the state(5) to be set 00:42:43.554 [2024-05-15 01:08:46.665906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac4af0 (9): Bad file descriptor 00:42:43.554 [2024-05-15 01:08:46.665976] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:42:43.554 [2024-05-15 01:08:46.665996] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:42:43.554 [2024-05-15 01:08:46.666015] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:43.554 [2024-05-15 01:08:46.666040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:43.554 [2024-05-15 01:08:46.666053] 
nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:43.554 [2024-05-15 01:08:46.666064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:43.554 [2024-05-15 01:08:46.666086] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:43.554 [2024-05-15 01:08:46.751101] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:42:43.554 [2024-05-15 01:08:46.752048] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:42:44.490 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@73 -- # xargs 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:42:44.491 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:44.749 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:42:44.749 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:42:44.749 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:42:44.749 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:42:44.749 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:44.749 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:44.749 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:44.749 01:08:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:42:44.749 [2024-05-15 01:08:47.879952] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:42:45.685 01:08:48 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:45.685 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:45.944 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:42:45.944 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:42:45.944 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:42:45.944 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:45.944 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:45.944 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:42:45.944 01:08:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@649 -- # local es=0 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:45.944 [2024-05-15 01:08:49.057489] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:42:45.944 2024/05/15 01:08:49 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:42:45.944 request: 00:42:45.944 { 00:42:45.944 "method": "bdev_nvme_start_mdns_discovery", 00:42:45.944 "params": { 00:42:45.944 "name": "mdns", 00:42:45.944 "svcname": "_nvme-disc._http", 00:42:45.944 "hostnqn": "nqn.2021-12.io.spdk:test" 00:42:45.944 } 00:42:45.944 } 00:42:45.944 Got JSON-RPC error response 00:42:45.944 GoRPCClient: error on JSON-RPC call 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # es=1 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@676 
-- # (( !es == 0 )) 00:42:45.944 01:08:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:42:46.510 [2024-05-15 01:08:49.646179] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:42:46.510 [2024-05-15 01:08:49.746177] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:42:46.769 [2024-05-15 01:08:49.846191] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:42:46.769 [2024-05-15 01:08:49.846254] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:42:46.769 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:46.769 cookie is 0 00:42:46.769 is_local: 1 00:42:46.769 our_own: 0 00:42:46.769 wide_area: 0 00:42:46.769 multicast: 1 00:42:46.769 cached: 1 00:42:46.769 [2024-05-15 01:08:49.946186] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:42:46.769 [2024-05-15 01:08:49.946228] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:42:46.769 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:46.769 cookie is 0 00:42:46.769 is_local: 1 00:42:46.769 our_own: 0 00:42:46.769 wide_area: 0 00:42:46.769 multicast: 1 00:42:46.769 cached: 1 00:42:46.769 [2024-05-15 01:08:49.946260] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:42:46.769 [2024-05-15 01:08:50.046193] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:42:46.769 [2024-05-15 01:08:50.046232] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:42:46.769 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:46.769 cookie is 0 00:42:46.769 is_local: 1 00:42:46.769 our_own: 0 00:42:46.769 wide_area: 0 00:42:46.769 multicast: 1 00:42:46.769 cached: 1 00:42:47.028 [2024-05-15 01:08:50.146182] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:42:47.028 [2024-05-15 01:08:50.146237] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:42:47.028 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:47.028 cookie is 0 00:42:47.028 is_local: 1 00:42:47.028 our_own: 0 00:42:47.028 wide_area: 0 00:42:47.028 multicast: 1 00:42:47.028 cached: 1 00:42:47.028 [2024-05-15 01:08:50.146267] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:42:47.595 [2024-05-15 01:08:50.855792] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:42:47.595 [2024-05-15 01:08:50.855833] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:42:47.595 [2024-05-15 01:08:50.855851] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:42:47.854 [2024-05-15 01:08:50.941905] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:42:47.854 [2024-05-15 01:08:51.001131] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:42:47.854 [2024-05-15 01:08:51.001165] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:42:47.854 [2024-05-15 01:08:51.055693] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:42:47.854 [2024-05-15 01:08:51.055717] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:42:47.854 [2024-05-15 01:08:51.055750] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:48.128 [2024-05-15 01:08:51.141912] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:42:48.128 [2024-05-15 01:08:51.202334] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:42:48.128 [2024-05-15 01:08:51.202400] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:42:51.414 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:42:51.414 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:42:51.414 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:51.414 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:51.414 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:42:51.414 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:42:51.414 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:51.415 01:08:54 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@649 -- # local es=0 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:51.415 [2024-05-15 01:08:54.243909] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:42:51.415 2024/05/15 01:08:54 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:42:51.415 request: 00:42:51.415 { 00:42:51.415 "method": "bdev_nvme_start_mdns_discovery", 00:42:51.415 "params": { 00:42:51.415 "name": "cdc", 00:42:51.415 "svcname": "_nvme-disc._tcp", 00:42:51.415 "hostnqn": "nqn.2021-12.io.spdk:test" 00:42:51.415 } 00:42:51.415 } 00:42:51.415 Got JSON-RPC error response 00:42:51.415 GoRPCClient: error on 
JSON-RPC call 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # es=1 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM 
EXIT 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 112358 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 112358 00:42:51.415 [2024-05-15 01:08:54.473184] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 112387 00:42:51.415 Got SIGTERM, quitting. 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:42:51.415 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:42:51.415 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:42:51.415 avahi-daemon 0.8 exiting. 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:51.415 rmmod nvme_tcp 00:42:51.415 rmmod nvme_fabrics 00:42:51.415 rmmod nvme_keyring 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 112308 ']' 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 112308 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@947 -- # '[' -z 112308 ']' 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # kill -0 112308 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # uname 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 112308 00:42:51.415 killing process with pid 112308 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:42:51.415 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 112308' 00:42:51.416 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # kill 112308 00:42:51.416 [2024-05-15 01:08:54.679226] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:42:51.416 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@971 -- # wait 112308 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:42:51.674 00:42:51.674 real 0m20.552s 00:42:51.674 user 0m40.087s 00:42:51.674 sys 0m2.038s 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:42:51.674 01:08:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:51.674 ************************************ 00:42:51.674 END TEST nvmf_mdns_discovery 00:42:51.674 ************************************ 00:42:51.934 01:08:54 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:42:51.934 01:08:54 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:42:51.934 01:08:54 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:42:51.934 01:08:54 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:42:51.934 01:08:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:51.934 ************************************ 00:42:51.934 START TEST nvmf_host_multipath 00:42:51.934 ************************************ 00:42:51.934 01:08:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:42:51.934 * Looking for test storage... 
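Before the multipath run gets under way, a quick recap of how the mdns checks above obtain their values may help: get_discovery_ctrlrs and get_bdev_list are thin wrappers around two RPCs plus a jq filter, issued against the host-side RPC socket (/tmp/host.sock). A minimal stand-alone sketch, assuming rpc_cmd resolves to scripts/rpc.py (the non-daemon case) and using the expected values shown in the comparisons above:

  # Names of the discovery controllers created by the mdns discovery service
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs
  # expected in this run: mdns0_nvme mdns1_nvme
  # Bdevs exposed by the discovered subsystems
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  # expected in this run: mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2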
00:42:51.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:42:51.934 01:08:55 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:42:51.934 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:42:51.935 Cannot find device "nvmf_tgt_br" 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:42:51.935 Cannot find device "nvmf_tgt_br2" 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:42:51.935 Cannot find device "nvmf_tgt_br" 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:42:51.935 Cannot find device "nvmf_tgt_br2" 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:42:51.935 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:42:52.193 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:52.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:52.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
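The block above is nvmf_veth_init building the virtual test network: one initiator-side veth (nvmf_init_if, 10.0.0.1) stays in the root namespace, two target-side veths (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge (the remaining master/iptables/ping steps follow just below). A condensed sketch of the same topology, runnable as root, with the names and addresses taken from this trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # reachability check from the root namespace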
00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:42:52.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:52.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:42:52.194 00:42:52.194 --- 10.0.0.2 ping statistics --- 00:42:52.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:52.194 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:42:52.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:52.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:42:52.194 00:42:52.194 --- 10.0.0.3 ping statistics --- 00:42:52.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:52.194 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:52.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:52.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:42:52.194 00:42:52.194 --- 10.0.0.1 ping statistics --- 00:42:52.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:52.194 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@721 -- # xtrace_disable 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=112935 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 112935 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath 
-- common/autotest_common.sh@828 -- # '[' -z 112935 ']' 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local max_retries=100 00:42:52.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:52.194 01:08:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:52.452 [2024-05-15 01:08:55.522300] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:42:52.452 [2024-05-15 01:08:55.522381] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:52.452 [2024-05-15 01:08:55.657798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:52.710 [2024-05-15 01:08:55.756250] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:52.710 [2024-05-15 01:08:55.756306] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:52.710 [2024-05-15 01:08:55.756319] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:52.710 [2024-05-15 01:08:55.756327] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:52.710 [2024-05-15 01:08:55.756335] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
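At this point nvmfappstart has launched the target inside the namespace (pid 112935) and waitforlisten has confirmed its RPC socket is up; the trace that follows shows multipath.sh creating the TCP transport, a Malloc namespace, two listeners, and then attaching both ports from a bdevperf host with multipath enabled. A condensed sketch of that sequence, where SPDK= is just shorthand for this job's checkout and the until-loop is only a rough stand-in for the framework's waitforlisten helper:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Target application inside the test namespace (same core mask and flags as this run)
  ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # Target-side configuration
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Host side: bdevperf (started separately with -z -r /var/tmp/bdevperf.sock) attaches both ports,
  # the second with -x multipath so the two paths surface as a single Nvme0n1 bdev
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10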
00:42:52.710 [2024-05-15 01:08:55.756749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:52.710 [2024-05-15 01:08:55.756760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:53.275 01:08:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:53.275 01:08:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@861 -- # return 0 00:42:53.275 01:08:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:53.275 01:08:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@727 -- # xtrace_disable 00:42:53.275 01:08:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:53.275 01:08:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:53.275 01:08:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=112935 00:42:53.275 01:08:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:53.534 [2024-05-15 01:08:56.794031] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:53.534 01:08:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:42:54.099 Malloc0 00:42:54.099 01:08:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:42:54.358 01:08:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:54.358 01:08:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:54.616 [2024-05-15 01:08:57.835770] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:42:54.616 [2024-05-15 01:08:57.836217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:54.616 01:08:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:54.881 [2024-05-15 01:08:58.064105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:54.881 01:08:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:42:54.881 01:08:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=113039 00:42:54.881 01:08:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:54.881 01:08:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 113039 /var/tmp/bdevperf.sock 00:42:54.881 01:08:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@828 -- # '[' -z 113039 ']' 00:42:54.881 01:08:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:54.881 01:08:58 nvmf_tcp.nvmf_host_multipath 
-- common/autotest_common.sh@833 -- # local max_retries=100 00:42:54.881 01:08:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:54.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:54.881 01:08:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@837 -- # xtrace_disable 00:42:54.881 01:08:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:55.821 01:08:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:42:55.821 01:08:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@861 -- # return 0 00:42:55.821 01:08:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:42:56.079 01:08:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:42:56.337 Nvme0n1 00:42:56.596 01:08:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:42:56.854 Nvme0n1 00:42:56.854 01:08:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:42:56.854 01:08:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:42:57.788 01:09:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:42:57.788 01:09:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:42:58.047 01:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:42:58.306 01:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:42:58.306 01:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113122 00:42:58.306 01:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:42:58.306 01:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112935 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:04.889 Attaching 4 probes... 
00:43:04.889 @path[10.0.0.2, 4421]: 16882 00:43:04.889 @path[10.0.0.2, 4421]: 17345 00:43:04.889 @path[10.0.0.2, 4421]: 17476 00:43:04.889 @path[10.0.0.2, 4421]: 17121 00:43:04.889 @path[10.0.0.2, 4421]: 17436 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113122 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:43:04.889 01:09:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:43:05.148 01:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:43:05.405 01:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:43:05.405 01:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113254 00:43:05.405 01:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112935 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:05.405 01:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:11.967 Attaching 4 probes... 
00:43:11.967 @path[10.0.0.2, 4420]: 16777 00:43:11.967 @path[10.0.0.2, 4420]: 17162 00:43:11.967 @path[10.0.0.2, 4420]: 17297 00:43:11.967 @path[10.0.0.2, 4420]: 17441 00:43:11.967 @path[10.0.0.2, 4420]: 17304 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113254 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:43:11.967 01:09:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:43:11.967 01:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:43:12.225 01:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:43:12.225 01:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113389 00:43:12.225 01:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:12.225 01:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112935 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:18.794 Attaching 4 probes... 
00:43:18.794 @path[10.0.0.2, 4421]: 12965 00:43:18.794 @path[10.0.0.2, 4421]: 17082 00:43:18.794 @path[10.0.0.2, 4421]: 17282 00:43:18.794 @path[10.0.0.2, 4421]: 17134 00:43:18.794 @path[10.0.0.2, 4421]: 17298 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113389 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:43:18.794 01:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:43:19.054 01:09:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:43:19.054 01:09:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113521 00:43:19.054 01:09:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112935 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:19.054 01:09:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:25.621 Attaching 4 probes... 
00:43:25.621 00:43:25.621 00:43:25.621 00:43:25.621 00:43:25.621 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113521 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:43:25.621 01:09:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:43:25.879 01:09:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:43:25.879 01:09:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113646 00:43:25.879 01:09:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112935 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:25.879 01:09:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:32.475 Attaching 4 probes... 
00:43:32.475 @path[10.0.0.2, 4421]: 16230 00:43:32.475 @path[10.0.0.2, 4421]: 16775 00:43:32.475 @path[10.0.0.2, 4421]: 16859 00:43:32.475 @path[10.0.0.2, 4421]: 17081 00:43:32.475 @path[10.0.0.2, 4421]: 16672 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113646 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:32.475 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:43:32.475 [2024-05-15 01:09:35.595705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a91d80 is same with the state(5) to be set
[identical tcp.c:1598:nvmf_tcp_qpair_set_recv_state messages for tqpair=0x1a91d80 repeated through 01:09:35.596875 while the port 4421 listener was removed; duplicate lines omitted]
00:43:32.477 01:09:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:43:33.413 01:09:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:43:33.413 01:09:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113781 00:43:33.414 01:09:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112935 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:33.414 01:09:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:40.010 Attaching 4 probes...
00:43:40.010 @path[10.0.0.2, 4420]: 16331 00:43:40.010 @path[10.0.0.2, 4420]: 16747 00:43:40.010 @path[10.0.0.2, 4420]: 15906 00:43:40.010 @path[10.0.0.2, 4420]: 15387 00:43:40.010 @path[10.0.0.2, 4420]: 14567 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113781 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:40.010 01:09:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:40.010 [2024-05-15 01:09:43.156460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:40.010 01:09:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:43:40.267 01:09:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:43:46.822 01:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:43:46.822 01:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113964 00:43:46.822 01:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:46.822 01:09:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112935 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:53.394 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:53.395 Attaching 4 probes... 
00:43:53.395 @path[10.0.0.2, 4421]: 14426 00:43:53.395 @path[10.0.0.2, 4421]: 14345 00:43:53.395 @path[10.0.0.2, 4421]: 14625 00:43:53.395 @path[10.0.0.2, 4421]: 14419 00:43:53.395 @path[10.0.0.2, 4421]: 14494 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113964 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 113039 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@947 -- # '[' -z 113039 ']' 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # kill -0 113039 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # uname 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 113039 00:43:53.395 killing process with pid 113039 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # echo 'killing process with pid 113039' 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # kill 113039 00:43:53.395 01:09:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@971 -- # wait 113039 00:43:53.395 Connection closed with partial response: 00:43:53.395 00:43:53.395 00:43:53.395 01:09:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 113039 00:43:53.395 01:09:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:43:53.395 [2024-05-15 01:08:58.129443] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:43:53.395 [2024-05-15 01:08:58.129545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113039 ] 00:43:53.395 [2024-05-15 01:08:58.267207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:53.395 [2024-05-15 01:08:58.360100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:43:53.395 Running I/O for 90 seconds... 
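The confirm_io_on_port checks replayed above reduce to two lookups: the rpc.py/jq query for the listener in the requested ANA state, and the awk | cut | sed pipeline over the bpftrace output in trace.txt. The following is a minimal standalone sketch of that flow for reference only; the function name confirm_io_on_port_sketch and the exact wiring are reconstructed from the commands visible in the log (paths, NQN, jq filter and text pipeline are copied from it), not taken from the authoritative multipath.sh source, and it assumes trace.txt has already been populated by scripts/bpftrace.sh with lines like "@path[10.0.0.2, 4420]: 16331".

# Sketch reconstructed from the logged commands; not the authoritative test helper.
confirm_io_on_port_sketch() {
    local ana_state=$1 expected_port=$2
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    # Port the target reports for the listener currently in the requested ANA state.
    local active_port
    active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="'"$ana_state"'") | .address.trsvcid')

    # Port actually carrying I/O according to the bpftrace samples in trace.txt,
    # e.g. "@path[10.0.0.2, 4420]: 16331" -> "4420".
    local port
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

    [[ $active_port == "$expected_port" ]] && [[ $port == "$expected_port" ]]
}

Under those assumptions, confirm_io_on_port_sketch non_optimized 4420 and confirm_io_on_port_sketch optimized 4421 correspond to the two checks traced above.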
00:43:53.395 [2024-05-15 01:09:08.466464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.466546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.466620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.466643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.466666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.466683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.466704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.466720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.466741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.466756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.466777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.466793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.466814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.466829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.466850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.466864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.466886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.466901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.466922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.466946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.466970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467344] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.395 [2024-05-15 01:09:08.467416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:53.395 [2024-05-15 01:09:08.467437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 
01:09:08.467732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.467980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.467996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.468018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.468033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.468055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.468070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.468696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59360 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.468724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.468751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.468768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.468789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.468805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.468827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.468842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.468863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.468879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.468900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.468916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.468937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.468953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.468974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.468989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.469010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.469036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.469059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.469075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.469097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:47 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.469112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.469134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.469148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.469169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.469184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.469206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.469221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.469242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.469257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.469278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.469293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.469314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.396 [2024-05-15 01:09:08.469336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:53.396 [2024-05-15 01:09:08.469358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:43:53.397 [2024-05-15 01:09:08.469855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.469959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.469988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:43:53.397 [2024-05-15 01:09:08.470639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.397 [2024-05-15 01:09:08.470654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.470675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.470690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.470711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.470727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.470747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.470763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.470784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.470798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.470820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.470835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.470856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.470871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.470892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.470914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.470957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.470974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.470996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:53.398 [2024-05-15 01:09:08.471011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.471048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.471084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.471120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.471172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.471213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.398 [2024-05-15 01:09:08.471250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.471769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.471785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.473563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.473610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.473641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.473660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.473721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.473747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:53.398 [2024-05-15 01:09:08.473769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.398 [2024-05-15 01:09:08.473784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:53.399 [2024-05-15 01:09:08.473805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.399 [2024-05-15 01:09:08.473820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:53.399 [2024-05-15 01:09:08.473841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.399 [2024-05-15 01:09:08.473856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:53.399 [2024-05-15 01:09:08.473877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.399 [2024-05-15 01:09:08.473892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:53.399 [2024-05-15 01:09:08.473913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.399 [2024-05-15 01:09:08.473928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:53.399 [2024-05-15 01:09:08.473950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.399 [2024-05-15 01:09:08.473965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:43:53.399 [2024-05-15 01:09:15.063 - 01:09:15.068] repeated nvme_qpair.c NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion), one per outstanding I/O on qid:1: WRITE sqid:1 nsid:1 lba:120520-121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:120400-120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:002a-0075 p:0 m:0 dnr:0
00:43:53.401 [2024-05-15 01:09:22.207 - 01:09:22.215] repeated nvme_qpair.c NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion), one per outstanding I/O on qid:1: READ sqid:1 nsid:1 lba:18584-18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:18608-19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0003-007a p:0 m:0 dnr:0
00:43:53.405 [2024-05-15 01:09:22.215329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215344] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.405 [2024-05-15 01:09:22.215676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.405 [2024-05-15 01:09:22.215712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.405 [2024-05-15 
01:09:22.215748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.215953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.215969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.216011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.216026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.216046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.216061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.216081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.216095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.216116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18680 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:43:53.405 [2024-05-15 01:09:22.216130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:53.405 [2024-05-15 01:09:22.216150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:124 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216834] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.216911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.216927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.217819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.217845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.217871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.217887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.217907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.217939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.217959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.217974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.217995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.218016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.218037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.218052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.218072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.218087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 
m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.218108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.218123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.218144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.218159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.218179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.218194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.218215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.218230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.218261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.406 [2024-05-15 01:09:22.218277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:53.406 [2024-05-15 01:09:22.218297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.218819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.218833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:53.407 [2024-05-15 01:09:22.232518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:53.407 [2024-05-15 01:09:22.232846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.407 [2024-05-15 01:09:22.232860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.232882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.232897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.233981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.233996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:43:53.408 [2024-05-15 01:09:22.234283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.408 [2024-05-15 01:09:22.234747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:43:53.408 [2024-05-15 01:09:22.234767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.409 [2024-05-15 01:09:22.234782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:43:53.409 [2024-05-15 01:09:22.234802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.409 [2024-05-15 01:09:22.234817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:53.409 [2024-05-15 01:09:22.234838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.409 [2024-05-15 01:09:22.234852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:53.409 [2024-05-15 01:09:22.234873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.409 [2024-05-15 01:09:22.234888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:53.409 [2024-05-15 01:09:22.234909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.409 [2024-05-15 01:09:22.234924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:53.409 [2024-05-15 01:09:22.234960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.409 [2024-05-15 01:09:22.234983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:53.409 [2024-05-15 01:09:22.235005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.409 [2024-05-15 01:09:22.235021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:43:53.409 nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated WRITE (and READ) commands on sqid:1 nsid:1, lba 18584-19600, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0 [2024-05-15 01:09:22.235-01:09:22.248]
00:43:53.415 [2024-05-15 01:09:22.248676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1
lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.248691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.248719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.248735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.248756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.248771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.248792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.248807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.248827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.248842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.248862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.248882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.248904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.248919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.248940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.248954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.248976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.249000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.249538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.249564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.249588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.249620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.249643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.249659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.249680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.249695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:53.415 [2024-05-15 01:09:22.249728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.415 [2024-05-15 01:09:22.249744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.249765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.249780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.249801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.249816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.249836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.249851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.249872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.249887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.249907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.249922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.249943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.249958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:43:53.416 [2024-05-15 01:09:22.249979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.249993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.250014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.250035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.250059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.250074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.250095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.250109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.250130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.250145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.250172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.250188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.250209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.250224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.250244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.259796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.259866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.259893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.259923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.259945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.259974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.416 [2024-05-15 01:09:22.260833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:53.416 [2024-05-15 01:09:22.260861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.260882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.260910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.260929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.260958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:53.417 [2024-05-15 01:09:22.260988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.417 [2024-05-15 01:09:22.261388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.417 [2024-05-15 01:09:22.261437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18600 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.417 [2024-05-15 01:09:22.261485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.261977] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.261996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.262025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.262045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.262074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.262093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.262122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.262142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.262170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.262190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.262230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.262258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.262288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.262308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.262336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.262356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.262385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.262405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.262433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.262453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:53.417 [2024-05-15 01:09:22.262481] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.417 [2024-05-15 01:09:22.262501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.262530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.262550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.262588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.262623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.262654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.262675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.262703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.262723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.262751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.262771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.262800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.262819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.262848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.262876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.262907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.262926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.262986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.263008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.263437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.263473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.263534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.263560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.263629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.263655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.263691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.263712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.263747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.263767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.263803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.263822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.263857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.263877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.263912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.263932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.263967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.263987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:53.418 [2024-05-15 01:09:22.264895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.418 [2024-05-15 01:09:22.264915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.264951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.264970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:53.419 [2024-05-15 01:09:22.265195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.265950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.265985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.266005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.266040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.266060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.266095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.266115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.266151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.266181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:22.266367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:53.419 [2024-05-15 01:09:22.266395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:35.597440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:53.419 [2024-05-15 01:09:35.597483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:53.419 [2024-05-15 01:09:35.597509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:53.419 [2024-05-15 01:09:35.597525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:53.419-00:43:53.423 [2024-05-15 01:09:35.597541 - 01:09:35.601353] nvme_qpair.c: [80 further READ commands (lba:50072 through lba:50704, SGL TRANSPORT DATA BLOCK) and 45 WRITE commands (lba:50712 through lba:51064, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each len:8 on sqid:1, printed and completed the same way, every completion reporting ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 while qid:1 was being deleted for the controller reset]
00:43:53.423 [2024-05-15 01:09:35.601383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:43:53.423 [2024-05-15 01:09:35.601397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:43:53.423 [2024-05-15 01:09:35.601408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51072 len:8 PRP1 0x0 PRP2 0x0
00:43:53.423 [2024-05-15 01:09:35.601421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:53.423 [2024-05-15 01:09:35.601477] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2064540 was disconnected and freed. reset controller.
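When triaging a run like this, the size of the abort burst is easier to read off by counting the completion records than by scrolling them; a trivial sketch (the saved log file name is illustrative, not from this run):

  grep -c 'ABORTED - SQ DELETION' nvmf-tcp-vg-autotest.log   # number of I/O completions aborted while qid:1 was deleted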
00:43:53.423 [2024-05-15 01:09:35.602781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:43:53.423 [2024-05-15 01:09:35.602862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20471d0 (9): Bad file descriptor
00:43:53.423 [2024-05-15 01:09:35.603022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:43:53.423 [2024-05-15 01:09:35.603081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:43:53.423 [2024-05-15 01:09:35.603104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20471d0 with addr=10.0.0.2, port=4421
00:43:53.423 [2024-05-15 01:09:35.603120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20471d0 is same with the state(5) to be set
00:43:53.423 [2024-05-15 01:09:35.603145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20471d0 (9): Bad file descriptor
00:43:53.423 [2024-05-15 01:09:35.603167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:43:53.423 [2024-05-15 01:09:35.603182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:43:53.423 [2024-05-15 01:09:35.603196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:43:53.423 [2024-05-15 01:09:35.603220] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:43:53.423 [2024-05-15 01:09:35.603235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:43:53.423 [2024-05-15 01:09:45.712178] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
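The connect() failures above (errno = 111, connection refused) against 10.0.0.2 port 4421 are expected while that listener is down; the reset only completes about ten seconds later, once the path is back. As a rough illustration only, and an assumption about how such a path flip can be driven rather than a transcript of what multipath.sh actually ran, the standard SPDK listener RPCs could be used like this:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # take the second path down: host reconnect attempts to port 4421 now fail with errno 111
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 10
  # bring it back: the next reconnect poll succeeds and "Resetting controller successful" is logged
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421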
00:43:53.423 Received shutdown signal, test time was about 55.684924 seconds
00:43:53.423
00:43:53.423 Latency(us)
00:43:53.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:53.423 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:43:53.423 Verification LBA range: start 0x0 length 0x4000
00:43:53.423 Nvme0n1 : 55.68 6975.74 27.25 0.00 0.00 18318.07 618.12 7076934.75
00:43:53.423 ===================================================================================================================
00:43:53.423 Total : 6975.74 27.25 0.00 0.00 18318.07 618.12 7076934.75
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:43:53.423 rmmod nvme_tcp
00:43:53.423 rmmod nvme_fabrics
00:43:53.423 rmmod nvme_keyring
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 112935 ']'
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 112935
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@947 -- # '[' -z 112935 ']'
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # kill -0 112935
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # uname
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 112935
00:43:53.423 killing process with pid 112935 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # echo 'killing process with pid 112935'
00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # kill 112935
00:43:53.423 [2024-05-15 01:09:56.419329] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:43:53.423 01:09:56 
nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@971 -- # wait 112935 00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:53.423 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:53.424 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:53.424 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:53.424 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:53.424 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:53.424 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:53.424 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:53.683 01:09:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:43:53.683 00:43:53.683 real 1m1.704s 00:43:53.683 user 2m55.004s 00:43:53.683 sys 0m13.699s 00:43:53.683 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:43:53.683 01:09:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:53.683 ************************************ 00:43:53.683 END TEST nvmf_host_multipath 00:43:53.683 ************************************ 00:43:53.683 01:09:56 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:43:53.683 01:09:56 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:43:53.683 01:09:56 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:43:53.683 01:09:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:53.683 ************************************ 00:43:53.683 START TEST nvmf_timeout 00:43:53.683 ************************************ 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:43:53.683 * Looking for test storage... 
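Before the nvmf_timeout output continues, note that the nvmf_host_multipath teardown traced above amounts to only a handful of commands; a minimal sketch using the same paths and PID that appear in the trace (the PID is specific to this run):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # remove the subsystem under test
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # unload the kernel initiator modules pulled in for the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt app started for the test (112935 was its PID here)
  kill 112935
  wait 112935
  # drop the test address from the initiator-side veth
  ip -4 addr flush nvmf_init_if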
00:43:53.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.683 
01:09:56 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:53.683 01:09:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:53.684 01:09:56 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:43:53.684 Cannot find device "nvmf_tgt_br" 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:43:53.684 Cannot find device "nvmf_tgt_br2" 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:43:53.684 Cannot find device "nvmf_tgt_br" 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:43:53.684 Cannot find device "nvmf_tgt_br2" 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:53.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:53.684 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:43:53.684 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:43:53.942 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:53.942 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:53.942 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:53.942 01:09:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:43:53.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:53.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:43:53.942 00:43:53.942 --- 10.0.0.2 ping statistics --- 00:43:53.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:53.942 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:43:53.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:43:53.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:43:53.942 00:43:53.942 --- 10.0.0.3 ping statistics --- 00:43:53.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:53.942 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:53.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:53.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:43:53.942 00:43:53.942 --- 10.0.0.1 ping statistics --- 00:43:53.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:53.942 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@721 -- # xtrace_disable 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=114290 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 114290 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 114290 ']' 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:43:53.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:43:53.942 01:09:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:53.942 [2024-05-15 01:09:57.221656] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
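Condensed, the nvmf_veth_init sequence logged above builds a small veth topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target addresses 10.0.0.2 (nvmf_tgt_if) and 10.0.0.3 (nvmf_tgt_if2) live inside the nvmf_tgt_ns_spdk namespace, the three peer ends are enslaved to the nvmf_br bridge, and TCP port 4420 is opened in iptables. A minimal standalone sketch, reconstructed only from commands already shown in this log (no new names or flags are introduced):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # initiator-to-target reachability checks, as logged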
00:43:53.942 [2024-05-15 01:09:57.221759] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:54.200 [2024-05-15 01:09:57.366265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:54.200 [2024-05-15 01:09:57.467478] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:54.200 [2024-05-15 01:09:57.467536] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:54.200 [2024-05-15 01:09:57.467551] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:54.200 [2024-05-15 01:09:57.467571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:54.200 [2024-05-15 01:09:57.467580] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:54.200 [2024-05-15 01:09:57.467717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:54.200 [2024-05-15 01:09:57.467910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:55.131 01:09:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:43:55.131 01:09:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:43:55.131 01:09:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:55.131 01:09:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@727 -- # xtrace_disable 00:43:55.132 01:09:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:55.132 01:09:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:55.132 01:09:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:55.132 01:09:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:55.388 [2024-05-15 01:09:58.538890] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:55.388 01:09:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:43:55.647 Malloc0 00:43:55.647 01:09:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:55.934 01:09:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:56.192 01:09:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:56.449 [2024-05-15 01:09:59.700026] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:43:56.449 [2024-05-15 01:09:59.700289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:56.449 01:09:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=114381 00:43:56.449 01:09:59 nvmf_tcp.nvmf_timeout 
-- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:43:56.449 01:09:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 114381 /var/tmp/bdevperf.sock 00:43:56.449 01:09:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 114381 ']' 00:43:56.449 01:09:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:56.449 01:09:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:43:56.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:56.450 01:09:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:56.450 01:09:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:43:56.450 01:09:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:56.707 [2024-05-15 01:09:59.769150] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:43:56.707 [2024-05-15 01:09:59.769222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114381 ] 00:43:56.707 [2024-05-15 01:09:59.906697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:56.707 [2024-05-15 01:09:59.990985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:43:57.642 01:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:43:57.642 01:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:43:57.642 01:10:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:43:57.900 01:10:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:43:58.159 NVMe0n1 00:43:58.159 01:10:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=114429 00:43:58.159 01:10:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:58.159 01:10:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:43:58.417 Running I/O for 10 seconds... 
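Stripped of the xtrace noise, the bring-up that this timeout test sits on reduces to the command sequence below. Everything here is taken from the log above; rpc.py, nvmf_tgt, bdevperf and bdevperf.py stand for the full /home/vagrant/spdk_repo/spdk/... paths shown there, and the two knobs the test actually exercises are --ctrlr-loss-timeout-sec and --reconnect-delay-sec on the initiator side:

    # target side: nvmf_tgt runs inside the nvmf_tgt_ns_spdk namespace
    # (the test waits for the /var/tmp/spdk.sock RPC socket before issuing RPCs)
    ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf with its own RPC socket, attached with reconnect parameters
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests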
00:43:59.414 01:10:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:59.414 [2024-05-15 01:10:02.634749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03420 is same with the state(5) to be set 00:43:59.414 [2024-05-15 01:10:02.634813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03420 is same with the state(5) to be set 00:43:59.414 [2024-05-15 01:10:02.634826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03420 is same with the state(5) to be set 00:43:59.414 [2024-05-15 01:10:02.634835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03420 is same with the state(5) to be set 00:43:59.414 [2024-05-15 01:10:02.634846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03420 is same with the state(5) to be set 00:43:59.414 [2024-05-15 01:10:02.634855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03420 is same with the state(5) to be set 00:43:59.414 [2024-05-15 01:10:02.634864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03420 is same with the state(5) to be set 00:43:59.414 [2024-05-15 01:10:02.635573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.414 [2024-05-15 01:10:02.635620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.414 [2024-05-15 01:10:02.635644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.414 [2024-05-15 01:10:02.635655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.414 [2024-05-15 01:10:02.635668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.414 [2024-05-15 01:10:02.635678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.414 [2024-05-15 01:10:02.635690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.414 [2024-05-15 01:10:02.635699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.414 [2024-05-15 01:10:02.635711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.414 [2024-05-15 01:10:02.635720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.414 [2024-05-15 01:10:02.635731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.414 [2024-05-15 01:10:02.635741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.635981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.635990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 
01:10:02.636174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.415 [2024-05-15 01:10:02.636463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.415 [2024-05-15 01:10:02.636474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.415 [2024-05-15 01:10:02.636483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.636504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.636524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.636545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.636566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.636587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.636627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.636662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.636683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.636983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.636994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.637004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.637026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.637046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 
01:10:02.637057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.637066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.637087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.637108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.637129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.637149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:59.416 [2024-05-15 01:10:02.637169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.637189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.637209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.637229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.637257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.416 [2024-05-15 01:10:02.637268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.416 [2024-05-15 01:10:02.637281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:47 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76104 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:59.417 [2024-05-15 01:10:02.637848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.417 [2024-05-15 01:10:02.637888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76168 len:8 PRP1 0x0 PRP2 0x0 00:43:59.417 [2024-05-15 01:10:02.637898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.417 [2024-05-15 01:10:02.637920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.417 [2024-05-15 01:10:02.637928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76176 len:8 PRP1 0x0 PRP2 0x0 00:43:59.417 [2024-05-15 
01:10:02.637938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.417 [2024-05-15 01:10:02.637955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.417 [2024-05-15 01:10:02.637969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76184 len:8 PRP1 0x0 PRP2 0x0 00:43:59.417 [2024-05-15 01:10:02.637978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.637988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.417 [2024-05-15 01:10:02.637995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.417 [2024-05-15 01:10:02.638003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76192 len:8 PRP1 0x0 PRP2 0x0 00:43:59.417 [2024-05-15 01:10:02.638012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.638022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.417 [2024-05-15 01:10:02.638029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.417 [2024-05-15 01:10:02.638037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76200 len:8 PRP1 0x0 PRP2 0x0 00:43:59.417 [2024-05-15 01:10:02.638047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.417 [2024-05-15 01:10:02.638056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76208 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76216 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76224 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76232 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76240 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76248 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76256 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76264 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76272 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76280 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75392 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75400 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75408 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75416 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75424 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:43:59.418 [2024-05-15 01:10:02.638581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75432 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75440 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:59.418 [2024-05-15 01:10:02.638686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:59.418 [2024-05-15 01:10:02.638694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75448 len:8 PRP1 0x0 PRP2 0x0 00:43:59.418 [2024-05-15 01:10:02.638705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638758] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf92d60 was disconnected and freed. reset controller. 
00:43:59.418 [2024-05-15 01:10:02.638872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:59.418 [2024-05-15 01:10:02.638890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.638901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:59.418 [2024-05-15 01:10:02.650109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.650143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:59.418 [2024-05-15 01:10:02.650155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.650166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:59.418 [2024-05-15 01:10:02.650175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:59.418 [2024-05-15 01:10:02.650185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf74ae0 is same with the state(5) to be set 00:43:59.418 [2024-05-15 01:10:02.650416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.418 [2024-05-15 01:10:02.650442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf74ae0 (9): Bad file descriptor 00:43:59.418 [2024-05-15 01:10:02.650569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.419 [2024-05-15 01:10:02.650647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.419 [2024-05-15 01:10:02.650667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf74ae0 with addr=10.0.0.2, port=4420 00:43:59.419 [2024-05-15 01:10:02.650678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf74ae0 is same with the state(5) to be set 00:43:59.419 [2024-05-15 01:10:02.650697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf74ae0 (9): Bad file descriptor 00:43:59.419 [2024-05-15 01:10:02.650714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:59.419 [2024-05-15 01:10:02.650723] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:59.419 [2024-05-15 01:10:02.650734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:59.419 [2024-05-15 01:10:02.650755] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
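The block above shows bdev_nvme repeatedly trying to reconnect to 10.0.0.2:4420, failing with connect() errno 111 (ECONNREFUSED), and scheduling another reset. The host/timeout.sh steps on the following lines then poll the bdevperf RPC socket to see whether the controller and its bdev are still registered. A minimal sketch of that polling step, reusing the rpc.py path and RPC socket visible in this run (the helper name check_reconnect_state is hypothetical and not part of the test script):

check_reconnect_state() {
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local sock=/var/tmp/bdevperf.sock
    local ctrlr bdev
    # While the controller is still registered these print "NVMe0" / "NVMe0n1";
    # further down in this log the same queries print empty strings once the
    # controller has been dropped.
    ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')
    echo "controller='$ctrlr' bdev='$bdev'"
}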
00:43:59.419 [2024-05-15 01:10:02.650767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:59.419 01:10:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:44:01.944 [2024-05-15 01:10:04.651051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.944 [2024-05-15 01:10:04.651159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.944 [2024-05-15 01:10:04.651178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf74ae0 with addr=10.0.0.2, port=4420 00:44:01.944 [2024-05-15 01:10:04.651193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf74ae0 is same with the state(5) to be set 00:44:01.944 [2024-05-15 01:10:04.651221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf74ae0 (9): Bad file descriptor 00:44:01.944 [2024-05-15 01:10:04.651276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:01.944 [2024-05-15 01:10:04.651287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:01.944 [2024-05-15 01:10:04.651299] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:01.944 [2024-05-15 01:10:04.651340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:01.944 [2024-05-15 01:10:04.651352] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.944 01:10:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:44:01.944 01:10:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:44:01.944 01:10:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:44:01.944 01:10:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:44:01.944 01:10:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:44:01.944 01:10:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:44:01.944 01:10:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:44:02.202 01:10:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:44:02.202 01:10:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:44:03.594 [2024-05-15 01:10:06.651527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:03.594 [2024-05-15 01:10:06.651643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:03.594 [2024-05-15 01:10:06.651675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf74ae0 with addr=10.0.0.2, port=4420 00:44:03.594 [2024-05-15 01:10:06.651690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf74ae0 is same with the state(5) to be set 00:44:03.594 [2024-05-15 01:10:06.651721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf74ae0 (9): Bad file descriptor 00:44:03.594 [2024-05-15 01:10:06.651742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:03.595 [2024-05-15 01:10:06.651752] 
nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:03.595 [2024-05-15 01:10:06.651764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:03.595 [2024-05-15 01:10:06.651794] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:03.595 [2024-05-15 01:10:06.651806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:05.494 [2024-05-15 01:10:08.651877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:06.428 00:44:06.428 Latency(us) 00:44:06.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:06.428 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:44:06.428 Verification LBA range: start 0x0 length 0x4000 00:44:06.428 NVMe0n1 : 8.20 1147.12 4.48 15.61 0.00 110188.51 2249.08 7046430.72 00:44:06.428 =================================================================================================================== 00:44:06.428 Total : 1147.12 4.48 15.61 0.00 110188.51 2249.08 7046430.72 00:44:06.428 0 00:44:07.362 01:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:44:07.362 01:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:44:07.362 01:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:44:07.362 01:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:44:07.362 01:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:44:07.362 01:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:44:07.362 01:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 114429 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 114381 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 114381 ']' 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 114381 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 114381 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:44:07.621 killing process with pid 114381 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 114381' 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 114381 00:44:07.621 Received shutdown signal, test time was about 9.425885 seconds 00:44:07.621 00:44:07.621 Latency(us) 00:44:07.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:07.621 =================================================================================================================== 
00:44:07.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:07.621 01:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 114381 00:44:07.893 01:10:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:08.154 [2024-05-15 01:10:11.310297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:08.155 01:10:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=114581 00:44:08.155 01:10:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:44:08.155 01:10:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 114581 /var/tmp/bdevperf.sock 00:44:08.155 01:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 114581 ']' 00:44:08.155 01:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:08.155 01:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:44:08.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:08.155 01:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:08.155 01:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:44:08.155 01:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:44:08.155 [2024-05-15 01:10:11.386217] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:44:08.155 [2024-05-15 01:10:11.386319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114581 ] 00:44:08.413 [2024-05-15 01:10:11.525720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:08.413 [2024-05-15 01:10:11.626522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:09.361 01:10:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:44:09.361 01:10:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:44:09.361 01:10:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:44:09.361 01:10:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:44:09.926 NVMe0n1 00:44:09.926 01:10:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=114627 00:44:09.926 01:10:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:09.926 01:10:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:44:09.926 Running I/O for 10 seconds... 
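The lines above re-arm the second half of the timeout test: the TCP listener for nqn.2016-06.io.spdk:cnode1 is added back, a fresh bdevperf is started in wait-for-RPC mode on /var/tmp/bdevperf.sock, transport retries are made unlimited with bdev_nvme_set_options -r -1, the controller is attached with --ctrlr-loss-timeout-sec 5, --fast-io-fail-timeout-sec 2 and --reconnect-delay-sec 1, and perform_tests starts the 10-second verify workload. A condensed sketch of those steps, assuming the same repository paths, target address and NQN as this run and an nvmf target that is already serving the subsystem:

SPDK=/home/vagrant/spdk_repo/spdk
RPC=$SPDK/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# Re-expose the subsystem over TCP on the target side.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start bdevperf in wait-for-RPC mode (-z) on its own RPC socket; the test
# script waits for $SOCK to come up (waitforlisten) before issuing RPCs.
$SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 -f &

# Unlimited transport-level retries, then attach with the reconnect knobs
# under test: 5 s ctrlr-loss timeout, 2 s fast-io-fail, 1 s reconnect delay.
$RPC -s $SOCK bdev_nvme_set_options -r -1
$RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the queued I/O jobs inside bdevperf.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests &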
00:44:10.859 01:10:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:11.121 [2024-05-15 01:10:14.216816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.216904] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.216918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.216929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.216942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.216953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.216963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.216974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.216987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.216997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217141] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.121 [2024-05-15 01:10:14.217263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217333] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217420] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217533] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 
00:44:11.122 [2024-05-15 01:10:14.217544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is 
same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.122 [2024-05-15 01:10:14.217824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217960] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.217990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.218187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbacbf0 is same with the state(5) to be set 00:44:11.123 [2024-05-15 01:10:14.219997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.123 [2024-05-15 01:10:14.220044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.123 [2024-05-15 01:10:14.220068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.123 
[2024-05-15 01:10:14.220080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.123 [2024-05-15 01:10:14.220093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.123 [2024-05-15 01:10:14.220103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.123 [2024-05-15 01:10:14.220115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.123 [2024-05-15 01:10:14.220124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.123 [2024-05-15 01:10:14.220136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.123 [2024-05-15 01:10:14.220145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.123 [2024-05-15 01:10:14.220157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.123 [2024-05-15 01:10:14.220167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.123 [2024-05-15 01:10:14.220178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.123 [2024-05-15 01:10:14.220188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.123 [2024-05-15 01:10:14.220199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.123 [2024-05-15 01:10:14.220208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.123 [2024-05-15 01:10:14.220220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.124 [2024-05-15 01:10:14.220758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.124 [2024-05-15 01:10:14.220770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.125 [2024-05-15 01:10:14.220779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.220791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.220801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.220812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.220822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.220833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.220842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.220853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.220863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.220874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.220884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.220895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.220904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.220915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.220925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.220936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.220946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 
[2024-05-15 01:10:14.220957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.220967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.220978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.220987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.220998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.125 [2024-05-15 01:10:14.221362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.125 [2024-05-15 01:10:14.221374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:112 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74312 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 
01:10:14.221833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.221984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.221995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.222005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.222016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.222025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.222036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.222045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.222057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.222066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.222077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.222086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.222097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.222107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.126 [2024-05-15 01:10:14.222117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.126 [2024-05-15 01:10:14.222127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.127 [2024-05-15 01:10:14.222147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.127 [2024-05-15 01:10:14.222167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74528 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74536 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 
01:10:14.222299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74544 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74552 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74560 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74568 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74576 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74584 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74592 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74600 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74608 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74616 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74624 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74632 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.127 [2024-05-15 01:10:14.222723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:74640 len:8 PRP1 0x0 PRP2 0x0 00:44:11.127 [2024-05-15 01:10:14.222732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.127 [2024-05-15 01:10:14.222742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.127 [2024-05-15 01:10:14.222749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.222757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74648 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.222766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.222776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.222783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.222791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74656 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.222813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.222822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.222830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.222837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74664 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.222846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.222856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.222863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.222871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74672 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.222880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.222889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.222896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.222904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74680 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.222923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.222933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.222941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.236105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74688 len:8 PRP1 0x0 PRP2 0x0 
00:44:11.128 [2024-05-15 01:10:14.236144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.236172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.236181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74696 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.236191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.236208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.236216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74704 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.236225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.236242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.236250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74712 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.236259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.236276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.236284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74720 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.236294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.236310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.236318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74728 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.236327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.236343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.236351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74736 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.236360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.236376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.236383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74744 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.236392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.128 [2024-05-15 01:10:14.236409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.128 [2024-05-15 01:10:14.236416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74752 len:8 PRP1 0x0 PRP2 0x0 00:44:11.128 [2024-05-15 01:10:14.236425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236491] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d05c40 was disconnected and freed. reset controller. 00:44:11.128 [2024-05-15 01:10:14.236638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:44:11.128 [2024-05-15 01:10:14.236657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:44:11.128 [2024-05-15 01:10:14.236684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:44:11.128 [2024-05-15 01:10:14.236704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:44:11.128 [2024-05-15 01:10:14.236723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.128 [2024-05-15 01:10:14.236733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7ae0 is same with the state(5) to be set 00:44:11.128 [2024-05-15 01:10:14.236953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:11.128 [2024-05-15 01:10:14.236987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce7ae0 (9): Bad file descriptor 00:44:11.128 [2024-05-15 01:10:14.237094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:11.128 [2024-05-15 01:10:14.237156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:11.128 
[2024-05-15 01:10:14.237173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce7ae0 with addr=10.0.0.2, port=4420 00:44:11.128 [2024-05-15 01:10:14.237184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7ae0 is same with the state(5) to be set 00:44:11.128 [2024-05-15 01:10:14.237202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce7ae0 (9): Bad file descriptor 00:44:11.128 [2024-05-15 01:10:14.237218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:11.128 [2024-05-15 01:10:14.237228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:11.128 [2024-05-15 01:10:14.237240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:11.128 [2024-05-15 01:10:14.237260] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:11.128 [2024-05-15 01:10:14.237271] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:11.128 01:10:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:44:12.065 [2024-05-15 01:10:15.237428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:12.065 [2024-05-15 01:10:15.237538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:12.065 [2024-05-15 01:10:15.237558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce7ae0 with addr=10.0.0.2, port=4420 00:44:12.065 [2024-05-15 01:10:15.237573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7ae0 is same with the state(5) to be set 00:44:12.065 [2024-05-15 01:10:15.237613] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce7ae0 (9): Bad file descriptor 00:44:12.065 [2024-05-15 01:10:15.237641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:12.065 [2024-05-15 01:10:15.237652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:12.065 [2024-05-15 01:10:15.237663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:12.065 [2024-05-15 01:10:15.237693] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:12.065 [2024-05-15 01:10:15.237705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:12.065 01:10:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:12.324 [2024-05-15 01:10:15.508034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:12.324 01:10:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 114627 00:44:13.258 [2024-05-15 01:10:16.250351] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:44:19.813 00:44:19.813 Latency(us) 00:44:19.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:19.813 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:44:19.813 Verification LBA range: start 0x0 length 0x4000 00:44:19.813 NVMe0n1 : 10.01 6177.97 24.13 0.00 0.00 20684.93 2040.55 3035150.89 00:44:19.813 =================================================================================================================== 00:44:19.813 Total : 6177.97 24.13 0.00 0.00 20684.93 2040.55 3035150.89 00:44:19.813 0 00:44:19.813 01:10:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=114741 00:44:19.813 01:10:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:19.813 01:10:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:44:20.071 Running I/O for 10 seconds... 00:44:21.004 01:10:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:21.266 [2024-05-15 01:10:24.341464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341530] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341567] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341945] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.341998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.342009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.342020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.342031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.342041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.266 [2024-05-15 01:10:24.342052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.267 [2024-05-15 01:10:24.342062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.267 [2024-05-15 01:10:24.342073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.267 [2024-05-15 01:10:24.342083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.267 [2024-05-15 01:10:24.342094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.267 [2024-05-15 01:10:24.342104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.267 [2024-05-15 01:10:24.342114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.267 [2024-05-15 01:10:24.342125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.267 [2024-05-15 01:10:24.342136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.267 [2024-05-15 01:10:24.342146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03fc0 is same with the state(5) to be set 00:44:21.267 [2024-05-15 01:10:24.343739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.343787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.343811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.343822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.343834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.343844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.343855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.343866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.343877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.343886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.343897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.343907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.343918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.343928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.343939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.343949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.343960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.343969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.343980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.343989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 
01:10:24.344021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.267 [2024-05-15 01:10:24.344256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.267 [2024-05-15 01:10:24.344277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.267 [2024-05-15 01:10:24.344298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.267 [2024-05-15 01:10:24.344320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.267 [2024-05-15 01:10:24.344341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.267 [2024-05-15 01:10:24.344353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.267 [2024-05-15 01:10:24.344362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74544 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 
01:10:24.344905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.344990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.344999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.345010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.345020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.345032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.345041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.345052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.345061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.345072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.345082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.268 [2024-05-15 01:10:24.345093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.268 [2024-05-15 01:10:24.345103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.269 [2024-05-15 01:10:24.345166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.269 [2024-05-15 01:10:24.345187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.269 [2024-05-15 01:10:24.345208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.269 [2024-05-15 01:10:24.345239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.269 [2024-05-15 01:10:24.345267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.269 [2024-05-15 01:10:24.345288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.269 [2024-05-15 01:10:24.345308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:21.269 [2024-05-15 01:10:24.345329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 
01:10:24.345802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.269 [2024-05-15 01:10:24.345873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.269 [2024-05-15 01:10:24.345883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.345894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.345903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.345914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.345924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.345935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.345944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.345955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.345965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.345976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.345986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.345997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:21.270 [2024-05-15 01:10:24.346368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:21.270 [2024-05-15 01:10:24.346408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75104 len:8 PRP1 0x0 PRP2 0x0 00:44:21.270 [2024-05-15 01:10:24.346418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:21.270 [2024-05-15 01:10:24.346443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:21.270 [2024-05-15 01:10:24.346455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75112 len:8 PRP1 0x0 PRP2 0x0 00:44:21.270 [2024-05-15 01:10:24.346464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:21.270 [2024-05-15 01:10:24.346482] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:21.270 [2024-05-15 01:10:24.346489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75120 len:8 PRP1 0x0 PRP2 0x0 00:44:21.270 [2024-05-15 01:10:24.346499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:21.270 [2024-05-15 01:10:24.346515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:21.270 [2024-05-15 01:10:24.346523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75128 len:8 PRP1 0x0 PRP2 0x0 00:44:21.270 [2024-05-15 01:10:24.346532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:21.270 [2024-05-15 01:10:24.346555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:21.270 [2024-05-15 01:10:24.346563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75136 len:8 PRP1 0x0 PRP2 0x0 00:44:21.270 [2024-05-15 01:10:24.346572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:21.270 [2024-05-15 01:10:24.346589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:21.270 [2024-05-15 01:10:24.346608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75144 len:8 PRP1 0x0 PRP2 0x0 00:44:21.270 [2024-05-15 01:10:24.346619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:21.270 [2024-05-15 01:10:24.346636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:21.270 [2024-05-15 01:10:24.346644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75152 len:8 PRP1 0x0 PRP2 0x0 00:44:21.270 [2024-05-15 01:10:24.346659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.270 [2024-05-15 01:10:24.346669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:21.270 [2024-05-15 01:10:24.346681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:21.270 [2024-05-15 01:10:24.346696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75160 len:8 PRP1 0x0 PRP2 0x0 00:44:21.271 [2024-05-15 01:10:24.346708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.271 [2024-05-15 01:10:24.346718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:21.271 [2024-05-15 01:10:24.346725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:44:21.271 [2024-05-15 01:10:24.346734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75168 len:8 PRP1 0x0 PRP2 0x0 00:44:21.271 [2024-05-15 01:10:24.346744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.271 [2024-05-15 01:10:24.346796] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d08450 was disconnected and freed. reset controller. 00:44:21.271 [2024-05-15 01:10:24.346874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:44:21.271 [2024-05-15 01:10:24.346890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.271 [2024-05-15 01:10:24.346901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:44:21.271 [2024-05-15 01:10:24.346927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.271 [2024-05-15 01:10:24.346941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:44:21.271 [2024-05-15 01:10:24.359227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.271 [2024-05-15 01:10:24.359281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:44:21.271 [2024-05-15 01:10:24.359301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:21.271 [2024-05-15 01:10:24.359317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7ae0 is same with the state(5) to be set 00:44:21.271 [2024-05-15 01:10:24.359725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:21.271 [2024-05-15 01:10:24.359769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce7ae0 (9): Bad file descriptor 00:44:21.271 [2024-05-15 01:10:24.359912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-05-15 01:10:24.359987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:21.271 [2024-05-15 01:10:24.360017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce7ae0 with addr=10.0.0.2, port=4420 00:44:21.271 [2024-05-15 01:10:24.360031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7ae0 is same with the state(5) to be set 00:44:21.271 [2024-05-15 01:10:24.360054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce7ae0 (9): Bad file descriptor 00:44:21.271 [2024-05-15 01:10:24.360074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:21.271 [2024-05-15 01:10:24.360086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:21.271 [2024-05-15 01:10:24.360099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:44:21.271 [2024-05-15 01:10:24.360125] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:21.271 [2024-05-15 01:10:24.360138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:21.271 01:10:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:44:22.206 [2024-05-15 01:10:25.360276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.206 [2024-05-15 01:10:25.360366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:22.206 [2024-05-15 01:10:25.360385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce7ae0 with addr=10.0.0.2, port=4420 00:44:22.206 [2024-05-15 01:10:25.360400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7ae0 is same with the state(5) to be set 00:44:22.206 [2024-05-15 01:10:25.360426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce7ae0 (9): Bad file descriptor 00:44:22.206 [2024-05-15 01:10:25.360446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:22.206 [2024-05-15 01:10:25.360455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:22.206 [2024-05-15 01:10:25.360466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:22.206 [2024-05-15 01:10:25.360495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:22.206 [2024-05-15 01:10:25.360507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:23.140 [2024-05-15 01:10:26.360699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:23.141 [2024-05-15 01:10:26.360830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:23.141 [2024-05-15 01:10:26.360852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce7ae0 with addr=10.0.0.2, port=4420 00:44:23.141 [2024-05-15 01:10:26.360867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7ae0 is same with the state(5) to be set 00:44:23.141 [2024-05-15 01:10:26.360899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce7ae0 (9): Bad file descriptor 00:44:23.141 [2024-05-15 01:10:26.360926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:23.141 [2024-05-15 01:10:26.360937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:23.141 [2024-05-15 01:10:26.360949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:23.141 [2024-05-15 01:10:26.360992] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:23.141 [2024-05-15 01:10:26.361004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:24.088 [2024-05-15 01:10:27.364546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:24.088 [2024-05-15 01:10:27.364665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:24.088 [2024-05-15 01:10:27.364686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce7ae0 with addr=10.0.0.2, port=4420 00:44:24.088 [2024-05-15 01:10:27.364701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7ae0 is same with the state(5) to be set 00:44:24.088 [2024-05-15 01:10:27.364957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce7ae0 (9): Bad file descriptor 00:44:24.088 [2024-05-15 01:10:27.365221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:24.088 [2024-05-15 01:10:27.365243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:24.088 [2024-05-15 01:10:27.365256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:24.088 01:10:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:24.088 [2024-05-15 01:10:27.369163] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:24.088 [2024-05-15 01:10:27.369189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:24.345 [2024-05-15 01:10:27.590840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:24.345 01:10:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 114741 00:44:25.281 [2024-05-15 01:10:28.407811] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:44:30.547
00:44:30.547 Latency(us)
00:44:30.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:30.547 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:44:30.547 Verification LBA range: start 0x0 length 0x4000
00:44:30.547 NVMe0n1 : 10.01 5311.11 20.75 3641.68 0.00 14270.84 711.21 3035150.89
00:44:30.547 ===================================================================================================================
00:44:30.547 Total : 5311.11 20.75 3641.68 0.00 14270.84 0.00 3035150.89
00:44:30.547 0
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 114581
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 114581 ']'
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 114581
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 114581
00:44:30.547 killing process with pid 114581 Received shutdown signal, test time was about 10.000000 seconds
00:44:30.547
00:44:30.547 Latency(us)
00:44:30.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:30.547 ===================================================================================================================
00:44:30.547 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_2
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']'
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 114581'
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 114581
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 114581
00:44:30.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=114866
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 114866 /var/tmp/bdevperf.sock
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 114866 ']'
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable
00:44:30.547 01:10:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:44:30.547 [2024-05-15 01:10:33.481896] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization...
00:44:30.547 [2024-05-15 01:10:33.482153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114866 ]
00:44:30.547 [2024-05-15 01:10:33.622150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:30.547 [2024-05-15 01:10:33.699485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:44:31.482 01:10:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:44:31.482 01:10:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0
00:44:31.482 01:10:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=114891
00:44:31.482 01:10:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:44:31.482 01:10:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 114866 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:44:31.739 01:10:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:44:31.997 NVMe0n1
00:44:31.997 01:10:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=114944
00:44:31.997 01:10:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:44:31.997 01:10:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:44:32.254 Running I/O for 10 seconds...
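For readers retracing the setup above: stripped of the xtrace prefixes, this second bdevperf pass amounts to the short command sequence below. Flags and paths are copied verbatim from the trace (the repo path is the one used on this runner), and the sketch assumes the nvmf target is already listening on 10.0.0.2:4420 as configured earlier in the log; it is a condensed sketch, not the full host/timeout.sh logic.

  # start bdevperf in RPC-wait mode (-z) on core mask 0x4: queue depth 128, 4096-byte random reads, 10 s run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  # apply the same bdev_nvme options the test script sets (flags taken verbatim from the trace)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # attach the NVMe/TCP controller with a 5 s controller-loss timeout and a 2 s reconnect delay
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the I/O run over the same RPC socket
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests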
00:44:33.188 01:10:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:33.450 [2024-05-15 01:10:36.507276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507435] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.450 [2024-05-15 01:10:36.507528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 
[... tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set: message repeated at every timestamp from 2024-05-15 01:10:36.507544 through 01:10:36.508459 ...] 
00:44:33.451 [2024-05-15 01:10:36.508468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa078b0 is same with the state(5) to be set 00:44:33.451 [2024-05-15 01:10:36.508920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.451 [2024-05-15 01:10:36.508951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.451 [2024-05-15 01:10:36.508974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.451 
[2024-05-15 01:10:36.508985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.451 [2024-05-15 01:10:36.509002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.451 [2024-05-15 01:10:36.509011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.451 [2024-05-15 01:10:36.509022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.451 [2024-05-15 01:10:36.509032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.451 [2024-05-15 01:10:36.509043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.451 [2024-05-15 01:10:36.509053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.451 [2024-05-15 01:10:36.509065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.451 [2024-05-15 01:10:36.509074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.451 [2024-05-15 01:10:36.509085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.451 [2024-05-15 01:10:36.509094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.452 [2024-05-15 01:10:36.509826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.452 [2024-05-15 01:10:36.509837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.509847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.509858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.509868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.509879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.509888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.509899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.509908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.509919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.509928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.509939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.509948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.509959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.509969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.509980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.509989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:44:33.453 [2024-05-15 01:10:36.510041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 
01:10:36.510244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.453 [2024-05-15 01:10:36.510560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.453 [2024-05-15 01:10:36.510569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510684] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.510985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.510999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 01:10:36.511411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.454 [2024-05-15 01:10:36.511422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.454 [2024-05-15 
01:10:36.511432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:33.455 [2024-05-15 01:10:36.511740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:33.455 [2024-05-15 01:10:36.511777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:33.455 [2024-05-15 01:10:36.511785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21520 len:8 PRP1 0x0 PRP2 0x0 00:44:33.455 [2024-05-15 01:10:36.511794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:33.455 [2024-05-15 01:10:36.511849] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b6bd60 was disconnected and freed. reset controller. 
00:44:33.455 [2024-05-15 01:10:36.512111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:33.455 [2024-05-15 01:10:36.512193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4dae0 (9): Bad file descriptor 00:44:33.455 [2024-05-15 01:10:36.512307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:33.455 [2024-05-15 01:10:36.512360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:33.455 [2024-05-15 01:10:36.512377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4dae0 with addr=10.0.0.2, port=4420 00:44:33.455 [2024-05-15 01:10:36.512388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4dae0 is same with the state(5) to be set 00:44:33.455 [2024-05-15 01:10:36.512406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4dae0 (9): Bad file descriptor 00:44:33.455 [2024-05-15 01:10:36.512423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:33.455 [2024-05-15 01:10:36.512432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:33.455 [2024-05-15 01:10:36.512443] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:33.455 [2024-05-15 01:10:36.512463] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:33.455 [2024-05-15 01:10:36.512474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:33.455 01:10:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 114944 00:44:35.354 [2024-05-15 01:10:38.512702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:35.354 [2024-05-15 01:10:38.512787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:35.354 [2024-05-15 01:10:38.512808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4dae0 with addr=10.0.0.2, port=4420 00:44:35.354 [2024-05-15 01:10:38.512823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4dae0 is same with the state(5) to be set 00:44:35.354 [2024-05-15 01:10:38.512850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4dae0 (9): Bad file descriptor 00:44:35.354 [2024-05-15 01:10:38.512872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:35.354 [2024-05-15 01:10:38.512883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:35.354 [2024-05-15 01:10:38.512894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:35.354 [2024-05-15 01:10:38.512922] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:35.354 [2024-05-15 01:10:38.512934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:37.254 [2024-05-15 01:10:40.513154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:37.255 [2024-05-15 01:10:40.513251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:37.255 [2024-05-15 01:10:40.513270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b4dae0 with addr=10.0.0.2, port=4420 00:44:37.255 [2024-05-15 01:10:40.513285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4dae0 is same with the state(5) to be set 00:44:37.255 [2024-05-15 01:10:40.513313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4dae0 (9): Bad file descriptor 00:44:37.255 [2024-05-15 01:10:40.513334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:37.255 [2024-05-15 01:10:40.513344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:37.255 [2024-05-15 01:10:40.513356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:37.255 [2024-05-15 01:10:40.513385] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:37.255 [2024-05-15 01:10:40.513398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:39.832 [2024-05-15 01:10:42.513522] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:40.398 00:44:40.398 Latency(us) 00:44:40.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:40.398 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:44:40.398 NVMe0n1 : 8.16 2568.30 10.03 15.69 0.00 49491.75 3515.11 7015926.69 00:44:40.398 =================================================================================================================== 00:44:40.398 Total : 2568.30 10.03 15.69 0.00 49491.75 3515.11 7015926.69 00:44:40.398 0 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:40.398 Attaching 5 probes... 
00:44:40.398 1443.619640: reset bdev controller NVMe0 00:44:40.398 1443.752775: reconnect bdev controller NVMe0 00:44:40.398 3444.065460: reconnect delay bdev controller NVMe0 00:44:40.398 3444.087474: reconnect bdev controller NVMe0 00:44:40.398 5444.533119: reconnect delay bdev controller NVMe0 00:44:40.398 5444.556692: reconnect bdev controller NVMe0 00:44:40.398 7444.989345: reconnect delay bdev controller NVMe0 00:44:40.398 7445.018770: reconnect bdev controller NVMe0 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 114891 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 114866 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 114866 ']' 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 114866 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 114866 00:44:40.398 killing process with pid 114866 00:44:40.398 Received shutdown signal, test time was about 8.223462 seconds 00:44:40.398 00:44:40.398 Latency(us) 00:44:40.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:40.398 =================================================================================================================== 00:44:40.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 114866' 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 114866 00:44:40.398 01:10:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 114866 00:44:40.713 01:10:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:40.971 rmmod nvme_tcp 00:44:40.971 rmmod nvme_fabrics 00:44:40.971 rmmod nvme_keyring 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- 
nvmf/common.sh@124 -- # set -e 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 114290 ']' 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 114290 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 114290 ']' 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 114290 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 114290 00:44:40.971 killing process with pid 114290 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 114290' 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 114290 00:44:40.971 [2024-05-15 01:10:44.153932] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:44:40.971 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 114290 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:44:41.229 00:44:41.229 real 0m47.688s 00:44:41.229 user 2m21.058s 00:44:41.229 sys 0m5.023s 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # xtrace_disable 00:44:41.229 01:10:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:44:41.229 ************************************ 00:44:41.229 END TEST nvmf_timeout 00:44:41.229 ************************************ 00:44:41.229 01:10:44 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:44:41.229 01:10:44 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:44:41.229 01:10:44 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:44:41.229 01:10:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.229 01:10:44 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:44:41.229 00:44:41.229 real 21m36.042s 00:44:41.229 user 64m56.998s 00:44:41.229 sys 4m28.149s 00:44:41.229 01:10:44 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:44:41.229 ************************************ 00:44:41.229 END TEST 
nvmf_tcp 00:44:41.229 ************************************ 00:44:41.229 01:10:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.488 01:10:44 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:44:41.488 01:10:44 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:41.488 01:10:44 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:44:41.488 01:10:44 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:44:41.488 01:10:44 -- common/autotest_common.sh@10 -- # set +x 00:44:41.488 ************************************ 00:44:41.488 START TEST spdkcli_nvmf_tcp 00:44:41.488 ************************************ 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:41.488 * Looking for test storage... 00:44:41.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:41.488 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=115162 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 115162 00:44:41.489 01:10:44 spdkcli_nvmf_tcp 
-- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # '[' -z 115162 ']' 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:41.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:44:41.489 01:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.489 [2024-05-15 01:10:44.719009] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:44:41.489 [2024-05-15 01:10:44.719092] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115162 ] 00:44:41.747 [2024-05-15 01:10:44.853318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:41.747 [2024-05-15 01:10:44.947444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:41.747 [2024-05-15 01:10:44.947455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:42.682 01:10:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:44:42.682 01:10:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # return 0 00:44:42.682 01:10:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:42.682 01:10:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:44:42.682 01:10:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:42.682 01:10:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:42.682 01:10:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:44:42.682 01:10:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:42.683 01:10:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:44:42.683 01:10:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:42.683 01:10:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:42.683 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:42.683 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:42.683 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:42.683 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:44:42.683 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:42.683 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:42.683 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:42.683 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:42.683 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:42.683 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:42.683 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:42.683 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:42.683 ' 00:44:45.226 [2024-05-15 01:10:48.433132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:46.601 [2024-05-15 01:10:49.713912] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:44:46.601 [2024-05-15 01:10:49.714235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:49.132 [2024-05-15 01:10:52.063679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:51.046 [2024-05-15 01:10:54.109057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:52.421 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:52.421 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:52.421 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:52.421 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:52.421 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:52.421 
Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:52.421 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:52.422 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:52.422 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:52.422 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:52.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:52.422 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:52.680 01:10:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:52.680 01:10:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:44:52.680 01:10:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:52.680 01:10:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:44:52.680 01:10:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:44:52.680 01:10:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:52.680 01:10:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:52.680 01:10:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:44:53.246 01:10:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:53.246 01:10:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:53.246 01:10:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:53.246 01:10:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:44:53.246 01:10:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:53.246 01:10:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:53.246 01:10:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:44:53.246 01:10:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:53.246 01:10:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:53.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:53.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:53.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:53.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:53.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:53.246 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:53.246 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:53.246 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:53.246 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:53.246 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:53.246 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:53.246 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:53.246 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:53.246 ' 00:44:58.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:58.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:58.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:58.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:58.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:58.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:58.530 
Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:58.530 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:58.530 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:58.530 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:58.530 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:58.530 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:58.530 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:58.530 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 115162 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 115162 ']' 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 115162 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # uname 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 115162 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:44:58.530 killing process with pid 115162 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:44:58.530 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 115162' 00:44:58.788 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # kill 115162 00:44:58.788 [2024-05-15 01:11:01.816888] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:44:58.788 01:11:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # wait 115162 00:44:58.788 01:11:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:58.788 01:11:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:58.789 01:11:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 115162 ']' 00:44:58.789 01:11:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 115162 00:44:58.789 01:11:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 115162 ']' 00:44:58.789 01:11:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 115162 00:44:58.789 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (115162) - No such process 00:44:58.789 Process with pid 115162 is not found 00:44:58.789 01:11:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # echo 'Process with pid 115162 is not found' 00:44:58.789 01:11:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:58.789 01:11:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:58.789 01:11:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:58.789 00:44:58.789 real 
0m17.477s 00:44:58.789 user 0m37.711s 00:44:58.789 sys 0m0.960s 00:44:58.789 01:11:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:44:58.789 ************************************ 00:44:58.789 01:11:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:58.789 END TEST spdkcli_nvmf_tcp 00:44:58.789 ************************************ 00:44:58.789 01:11:02 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:58.789 01:11:02 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:44:58.789 01:11:02 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:44:58.789 01:11:02 -- common/autotest_common.sh@10 -- # set +x 00:44:59.048 ************************************ 00:44:59.048 START TEST nvmf_identify_passthru 00:44:59.048 ************************************ 00:44:59.048 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:59.048 * Looking for test storage... 00:44:59.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:59.048 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:59.048 01:11:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:59.048 01:11:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:59.048 01:11:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:59.048 01:11:02 
nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:59.048 01:11:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:59.048 01:11:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:59.048 01:11:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:59.048 01:11:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:59.048 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:59.048 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:59.048 01:11:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:59.048 01:11:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:59.048 01:11:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:59.048 01:11:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:59.048 01:11:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:59.049 01:11:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:59.049 01:11:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:59.049 01:11:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:59.049 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:59.049 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:59.049 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@432 
-- # nvmf_veth_init 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:44:59.049 Cannot find device "nvmf_tgt_br" 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:44:59.049 Cannot find device "nvmf_tgt_br2" 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:44:59.049 Cannot find device "nvmf_tgt_br" 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:44:59.049 Cannot find device "nvmf_tgt_br2" 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:59.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:59.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:59.049 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:44:59.306 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:59.306 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:44:59.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:59.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:44:59.307 00:44:59.307 --- 10.0.0.2 ping statistics --- 00:44:59.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:59.307 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:44:59.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:59.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:44:59.307 00:44:59.307 --- 10.0.0.3 ping statistics --- 00:44:59.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:59.307 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:59.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:59.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:44:59.307 00:44:59.307 --- 10.0.0.1 ping statistics --- 00:44:59.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:59.307 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:59.307 01:11:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:59.307 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:59.307 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=() 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # local bdfs 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=($(get_nvme_bdfs)) 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # get_nvme_bdfs 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=() 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # local bdfs 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:59.307 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:44:59.565 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:44:59.565 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:44:59.565 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # echo 0000:00:10.0 00:44:59.565 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:44:59.565 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:44:59.565 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:44:59.565 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:59.565 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:59.565 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
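For reference, the virtual topology that nvmf_veth_init assembled above reduces to a veth pair per side, a bridge joining them, and a network namespace holding the target end, with 10.0.0.1 on the initiator side and 10.0.0.2 inside the namespace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is built the same way and left out here for brevity). A condensed sketch reconstructed only from the commands echoed in the trace, to be run as root:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The sub-millisecond ping RTTs above are what you would expect from pure in-kernel veth traffic; with connectivity proven, the harness picks the first local NVMe device (0000:00:10.0) via gen_nvme.sh and reads its serial number, 12340, from spdk_nvme_identify.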
00:44:59.565 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:44:59.565 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:59.565 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:59.822 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:44:59.822 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:59.822 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:44:59.822 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:59.822 01:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:59.822 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:44:59.822 01:11:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:59.822 01:11:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=115646 00:44:59.822 01:11:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:59.822 01:11:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:59.822 01:11:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 115646 00:44:59.822 01:11:03 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # '[' -z 115646 ']' 00:44:59.822 01:11:03 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:59.822 01:11:03 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local max_retries=100 00:44:59.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:59.822 01:11:03 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:59.822 01:11:03 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # xtrace_disable 00:44:59.822 01:11:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:59.822 [2024-05-15 01:11:03.052231] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:44:59.822 [2024-05-15 01:11:03.052334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:00.081 [2024-05-15 01:11:03.192462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:00.081 [2024-05-15 01:11:03.296983] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:00.081 [2024-05-15 01:11:03.297400] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:00.081 [2024-05-15 01:11:03.297661] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:00.081 [2024-05-15 01:11:03.298016] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
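Because the target above was launched with --wait-for-rpc inside the nvmf_tgt_ns_spdk namespace, it sits idle until configuration arrives over /var/tmp/spdk.sock; the rpc_cmd calls traced from here on condense to roughly the following plain scripts/rpc.py sequence (a sketch assembled from those calls relative to the SPDK repo root, not an exact transcript of the harness):

    # Enable the passthru identify handler before the framework finishes init,
    # then start the subsystems that --wait-for-rpc deferred.
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    # TCP transport plus a bdev wrapping the local PCIe drive.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # One subsystem, one namespace, one TCP listener reachable inside the namespace.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems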
00:45:00.081 [2024-05-15 01:11:03.298224] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:00.081 [2024-05-15 01:11:03.298630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:00.081 [2024-05-15 01:11:03.298733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:45:00.081 [2024-05-15 01:11:03.298808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:45:00.081 [2024-05-15 01:11:03.298817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:01.014 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:45:01.014 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # return 0 00:45:01.014 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:45:01.014 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.015 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.015 [2024-05-15 01:11:04.146864] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.015 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.015 [2024-05-15 01:11:04.160889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.015 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.015 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.015 Nvme0n1 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.015 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.015 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.015 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.015 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.274 [2024-05-15 01:11:04.303499] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:45:01.274 [2024-05-15 01:11:04.303843] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:01.274 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.274 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:45:01.274 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.274 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.274 [ 00:45:01.274 { 00:45:01.274 "allow_any_host": true, 00:45:01.274 "hosts": [], 00:45:01.274 "listen_addresses": [], 00:45:01.274 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:45:01.274 "subtype": "Discovery" 00:45:01.274 }, 00:45:01.274 { 00:45:01.274 "allow_any_host": true, 00:45:01.274 "hosts": [], 00:45:01.274 "listen_addresses": [ 00:45:01.274 { 00:45:01.274 "adrfam": "IPv4", 00:45:01.274 "traddr": "10.0.0.2", 00:45:01.274 "trsvcid": "4420", 00:45:01.274 "trtype": "TCP" 00:45:01.274 } 00:45:01.274 ], 00:45:01.274 "max_cntlid": 65519, 00:45:01.274 "max_namespaces": 1, 00:45:01.274 "min_cntlid": 1, 00:45:01.274 "model_number": "SPDK bdev Controller", 00:45:01.274 "namespaces": [ 00:45:01.274 { 00:45:01.274 "bdev_name": "Nvme0n1", 00:45:01.274 "name": "Nvme0n1", 00:45:01.274 "nguid": "CED8B5F31A4B474D95B4769E1512C3BD", 00:45:01.274 "nsid": 1, 00:45:01.274 "uuid": "ced8b5f3-1a4b-474d-95b4-769e1512c3bd" 00:45:01.274 } 00:45:01.274 ], 00:45:01.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:01.274 "serial_number": "SPDK00000000000001", 00:45:01.274 "subtype": "NVMe" 00:45:01.274 } 00:45:01.274 ] 00:45:01.274 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.274 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:45:01.274 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:01.274 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:45:01.274 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:45:01.274 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:45:01.274 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:45:01.274 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:01.532 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:45:01.532 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:45:01.532 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:45:01.532 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:01.532 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.532 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.532 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.532 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:45:01.532 01:11:04 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:45:01.532 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:01.532 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:45:01.791 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:01.791 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:45:01.791 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:01.791 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:01.791 rmmod nvme_tcp 00:45:01.791 rmmod nvme_fabrics 00:45:01.791 rmmod nvme_keyring 00:45:01.791 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:01.791 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:45:01.791 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:45:01.791 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 115646 ']' 00:45:01.791 01:11:04 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 115646 00:45:01.791 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' -z 115646 ']' 00:45:01.791 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # kill -0 115646 00:45:01.791 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # uname 00:45:01.791 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:45:01.791 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 115646 00:45:01.791 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:45:01.791 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:45:01.791 killing process with pid 115646 00:45:01.791 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # echo 'killing process with pid 115646' 00:45:01.791 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # kill 115646 00:45:01.791 [2024-05-15 01:11:04.912561] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:45:01.791 01:11:04 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # wait 115646 00:45:02.051 01:11:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
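The passthru test above boils down to one RPC sequence: enable the custom identify handler before framework init, create a TCP transport, attach the PCIe controller as a bdev, export it through a single-namespace subsystem, then confirm that the serial and model numbers reported over NVMe/TCP match the PCIe values (12340 / QEMU). A minimal sketch of that sequence, assuming the target was started with --wait-for-rpc and that scripts/rpc.py talks to the default /var/tmp/spdk.sock (the trace drives the same RPCs through its rpc_cmd wrapper):

    rpc.py nvmf_set_config --passthru-identify-ctrlr                   # must happen before framework init
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # identify over the fabric; Serial Number / Model Number must match the PCIe device
    spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'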
00:45:02.051 01:11:05 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:02.051 01:11:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:02.051 01:11:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:02.051 01:11:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:02.051 01:11:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:02.051 01:11:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:02.051 01:11:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:02.051 01:11:05 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:45:02.051 00:45:02.051 real 0m3.092s 00:45:02.051 user 0m7.711s 00:45:02.051 sys 0m0.821s 00:45:02.051 01:11:05 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # xtrace_disable 00:45:02.051 01:11:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:02.051 ************************************ 00:45:02.051 END TEST nvmf_identify_passthru 00:45:02.051 ************************************ 00:45:02.051 01:11:05 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:45:02.051 01:11:05 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:45:02.051 01:11:05 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:45:02.051 01:11:05 -- common/autotest_common.sh@10 -- # set +x 00:45:02.051 ************************************ 00:45:02.051 START TEST nvmf_dif 00:45:02.051 ************************************ 00:45:02.051 01:11:05 nvmf_dif -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:45:02.051 * Looking for test storage... 
00:45:02.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:02.051 01:11:05 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:02.051 01:11:05 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:02.051 01:11:05 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:02.051 01:11:05 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:02.051 01:11:05 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:02.052 01:11:05 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.052 01:11:05 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.052 01:11:05 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.052 01:11:05 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:45:02.052 01:11:05 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:02.052 01:11:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:45:02.052 01:11:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:45:02.052 01:11:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:45:02.052 01:11:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:45:02.052 01:11:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:02.052 01:11:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:02.052 01:11:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:02.052 01:11:05 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:02.052 01:11:05 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:45:02.309 01:11:05 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:45:02.309 Cannot find device "nvmf_tgt_br" 00:45:02.309 01:11:05 nvmf_dif -- nvmf/common.sh@155 -- # true 00:45:02.309 01:11:05 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:45:02.309 Cannot find device "nvmf_tgt_br2" 00:45:02.309 01:11:05 nvmf_dif -- nvmf/common.sh@156 -- # true 00:45:02.309 01:11:05 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:45:02.309 01:11:05 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:45:02.309 Cannot find device "nvmf_tgt_br" 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@158 -- # true 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:45:02.310 Cannot find device "nvmf_tgt_br2" 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@159 -- # true 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:02.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@162 -- # true 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:02.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@163 -- # true 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:02.310 01:11:05 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:02.567 
01:11:05 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:45:02.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:02.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:45:02.567 00:45:02.567 --- 10.0.0.2 ping statistics --- 00:45:02.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:02.567 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:45:02.567 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:02.567 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.142 ms 00:45:02.567 00:45:02.567 --- 10.0.0.3 ping statistics --- 00:45:02.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:02.567 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:02.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:02.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:45:02.567 00:45:02.567 --- 10.0.0.1 ping statistics --- 00:45:02.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:02.567 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:45:02.567 01:11:05 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:45:02.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:02.824 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:45:02.824 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:45:02.824 01:11:06 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:02.824 01:11:06 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:02.824 01:11:06 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:02.824 01:11:06 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:02.824 01:11:06 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:02.825 01:11:06 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:02.825 01:11:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:45:02.825 01:11:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:45:02.825 01:11:06 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:02.825 01:11:06 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:45:02.825 01:11:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:03.082 01:11:06 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=115995 
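nvmf_veth_init above builds a three-veth topology: nvmf_init_if (10.0.0.1/24) stays in the host namespace as the initiator side, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the nvmf_br bridge ties the peer ends together; the three pings above verify the wiring. A condensed sketch with names and addresses taken from the trace (the second target interface, the individual link-up commands, and the bridge FORWARD rule follow the same pattern and are elided here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator end stays in the host ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                     # target end lives in its own ns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                            # bridge the two peer ends
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    # 10.0.0.1 (host ns)  <->  nvmf_br  <->  10.0.0.2 / 10.0.0.3 (nvmf_tgt_ns_spdk)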
00:45:03.082 01:11:06 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:45:03.082 01:11:06 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 115995 00:45:03.082 01:11:06 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 115995 ']' 00:45:03.082 01:11:06 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:03.082 01:11:06 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:45:03.082 01:11:06 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:03.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:03.082 01:11:06 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:45:03.082 01:11:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:03.082 [2024-05-15 01:11:06.163496] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:45:03.082 [2024-05-15 01:11:06.163579] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:03.082 [2024-05-15 01:11:06.303335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:03.339 [2024-05-15 01:11:06.426339] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:03.339 [2024-05-15 01:11:06.426415] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:03.339 [2024-05-15 01:11:06.426429] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:03.339 [2024-05-15 01:11:06.426440] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:03.339 [2024-05-15 01:11:06.426450] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:45:03.339 [2024-05-15 01:11:06.426482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:03.905 01:11:07 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:45:03.905 01:11:07 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:45:03.905 01:11:07 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:03.905 01:11:07 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:45:03.905 01:11:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:04.163 01:11:07 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:04.163 01:11:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:45:04.163 01:11:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:45:04.163 01:11:07 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:04.163 01:11:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:04.163 [2024-05-15 01:11:07.238696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:04.163 01:11:07 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:04.163 01:11:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:45:04.163 01:11:07 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:45:04.163 01:11:07 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:45:04.163 01:11:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:04.163 ************************************ 00:45:04.163 START TEST fio_dif_1_default 00:45:04.163 ************************************ 00:45:04.163 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:45:04.163 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:45:04.163 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:45:04.163 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:45:04.163 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:45:04.163 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:45:04.163 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:04.164 bdev_null0 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:04.164 [2024-05-15 01:11:07.286591] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:45:04.164 [2024-05-15 01:11:07.286910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:04.164 { 00:45:04.164 "params": { 00:45:04.164 "name": "Nvme$subsystem", 00:45:04.164 "trtype": "$TEST_TRANSPORT", 00:45:04.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:04.164 "adrfam": "ipv4", 00:45:04.164 "trsvcid": "$NVMF_PORT", 00:45:04.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:04.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:04.164 "hdgst": ${hdgst:-false}, 00:45:04.164 "ddgst": ${ddgst:-false} 00:45:04.164 }, 00:45:04.164 "method": "bdev_nvme_attach_controller" 00:45:04.164 } 00:45:04.164 EOF 00:45:04.164 )") 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local sanitizers 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:04.164 01:11:07 
nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:04.164 "params": { 00:45:04.164 "name": "Nvme0", 00:45:04.164 "trtype": "tcp", 00:45:04.164 "traddr": "10.0.0.2", 00:45:04.164 "adrfam": "ipv4", 00:45:04.164 "trsvcid": "4420", 00:45:04.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:04.164 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:04.164 "hdgst": false, 00:45:04.164 "ddgst": false 00:45:04.164 }, 00:45:04.164 "method": "bdev_nvme_attach_controller" 00:45:04.164 }' 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:04.164 01:11:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:04.422 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:04.422 fio-3.35 00:45:04.422 Starting 1 thread 00:45:16.619 00:45:16.619 filename0: (groupid=0, jobs=1): err= 0: pid=116075: Wed May 15 01:11:18 2024 00:45:16.619 read: IOPS=3652, BW=14.3MiB/s (15.0MB/s)(143MiB/10019msec) 00:45:16.619 slat (usec): min=7, max=108, avg= 8.87, stdev= 3.25 00:45:16.619 clat (usec): min=435, max=42664, avg=1068.35, stdev=4791.72 00:45:16.619 lat (usec): min=442, max=42678, avg=1077.22, stdev=4791.97 00:45:16.619 clat percentiles (usec): 00:45:16.619 | 1.00th=[ 457], 5.00th=[ 465], 10.00th=[ 469], 20.00th=[ 478], 00:45:16.619 | 30.00th=[ 482], 40.00th=[ 486], 50.00th=[ 490], 60.00th=[ 494], 00:45:16.619 | 70.00th=[ 502], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 562], 00:45:16.619 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:45:16.619 | 99.99th=[42730] 00:45:16.619 bw ( KiB/s): min= 3040, max=31104, per=100.00%, avg=14636.80, stdev=7942.40, samples=20 00:45:16.619 iops : min= 760, 
max= 7776, avg=3659.20, stdev=1985.60, samples=20 00:45:16.619 lat (usec) : 500=69.90%, 750=28.64%, 1000=0.03% 00:45:16.619 lat (msec) : 4=0.01%, 10=0.01%, 50=1.41% 00:45:16.619 cpu : usr=87.47%, sys=10.51%, ctx=35, majf=0, minf=0 00:45:16.619 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:16.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.619 issued rwts: total=36596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:16.619 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:16.619 00:45:16.619 Run status group 0 (all jobs): 00:45:16.619 READ: bw=14.3MiB/s (15.0MB/s), 14.3MiB/s-14.3MiB/s (15.0MB/s-15.0MB/s), io=143MiB (150MB), run=10019-10019msec 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.619 00:45:16.619 real 0m11.025s 00:45:16.619 user 0m9.420s 00:45:16.619 sys 0m1.323s 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:45:16.619 ************************************ 00:45:16.619 END TEST fio_dif_1_default 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 ************************************ 00:45:16.619 01:11:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:16.619 01:11:18 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:45:16.619 01:11:18 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 ************************************ 00:45:16.619 START TEST fio_dif_1_multi_subsystems 00:45:16.619 ************************************ 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for 
sub in "$@" 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 bdev_null0 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 [2024-05-15 01:11:18.359993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 bdev_null1 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 01:11:18 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:16.619 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:16.619 { 00:45:16.619 "params": { 00:45:16.620 "name": "Nvme$subsystem", 00:45:16.620 "trtype": "$TEST_TRANSPORT", 00:45:16.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:16.620 "adrfam": "ipv4", 00:45:16.620 "trsvcid": "$NVMF_PORT", 00:45:16.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:16.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:16.620 "hdgst": ${hdgst:-false}, 00:45:16.620 "ddgst": ${ddgst:-false} 00:45:16.620 }, 00:45:16.620 "method": "bdev_nvme_attach_controller" 00:45:16.620 } 00:45:16.620 EOF 00:45:16.620 )") 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local sanitizers 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:16.620 { 00:45:16.620 "params": { 00:45:16.620 "name": "Nvme$subsystem", 00:45:16.620 "trtype": "$TEST_TRANSPORT", 00:45:16.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:16.620 "adrfam": "ipv4", 00:45:16.620 "trsvcid": "$NVMF_PORT", 00:45:16.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:16.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:16.620 "hdgst": ${hdgst:-false}, 00:45:16.620 "ddgst": ${ddgst:-false} 00:45:16.620 }, 00:45:16.620 "method": "bdev_nvme_attach_controller" 00:45:16.620 } 00:45:16.620 EOF 00:45:16.620 )") 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:16.620 "params": { 00:45:16.620 "name": "Nvme0", 00:45:16.620 "trtype": "tcp", 00:45:16.620 "traddr": "10.0.0.2", 00:45:16.620 "adrfam": "ipv4", 00:45:16.620 "trsvcid": "4420", 00:45:16.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:16.620 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:16.620 "hdgst": false, 00:45:16.620 "ddgst": false 00:45:16.620 }, 00:45:16.620 "method": "bdev_nvme_attach_controller" 00:45:16.620 },{ 00:45:16.620 "params": { 00:45:16.620 "name": "Nvme1", 00:45:16.620 "trtype": "tcp", 00:45:16.620 "traddr": "10.0.0.2", 00:45:16.620 "adrfam": "ipv4", 00:45:16.620 "trsvcid": "4420", 00:45:16.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:16.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:16.620 "hdgst": false, 00:45:16.620 "ddgst": false 00:45:16.620 }, 00:45:16.620 "method": "bdev_nvme_attach_controller" 00:45:16.620 }' 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:16.620 01:11:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:16.620 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:16.620 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:16.620 fio-3.35 00:45:16.620 Starting 2 threads 00:45:26.610 00:45:26.610 filename0: (groupid=0, jobs=1): err= 0: pid=116230: Wed May 15 01:11:29 2024 00:45:26.610 read: IOPS=184, BW=738KiB/s (756kB/s)(7408KiB/10033msec) 00:45:26.610 slat (usec): min=6, max=104, avg=13.30, stdev=10.81 00:45:26.610 clat (usec): min=452, max=42259, avg=21622.97, stdev=20271.87 00:45:26.610 lat (usec): min=460, max=42307, avg=21636.27, stdev=20271.85 00:45:26.610 clat percentiles (usec): 00:45:26.610 | 1.00th=[ 465], 5.00th=[ 486], 10.00th=[ 498], 20.00th=[ 537], 00:45:26.610 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[40633], 60.00th=[41157], 00:45:26.610 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:45:26.610 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:45:26.610 | 99.99th=[42206] 00:45:26.610 bw ( KiB/s): min= 448, max= 1216, per=48.43%, avg=739.20, stdev=189.71, samples=20 00:45:26.610 iops : min= 112, 
max= 304, avg=184.80, stdev=47.43, samples=20 00:45:26.610 lat (usec) : 500=10.10%, 750=32.34%, 1000=4.97% 00:45:26.610 lat (msec) : 2=0.54%, 10=0.22%, 50=51.84% 00:45:26.610 cpu : usr=95.64%, sys=3.54%, ctx=126, majf=0, minf=9 00:45:26.610 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:26.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:26.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:26.610 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:26.610 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:26.610 filename1: (groupid=0, jobs=1): err= 0: pid=116231: Wed May 15 01:11:29 2024 00:45:26.610 read: IOPS=196, BW=788KiB/s (807kB/s)(7904KiB/10035msec) 00:45:26.610 slat (nsec): min=7601, max=57303, avg=11361.57, stdev=7238.37 00:45:26.610 clat (usec): min=447, max=42007, avg=20275.23, stdev=20247.97 00:45:26.610 lat (usec): min=455, max=42018, avg=20286.59, stdev=20248.10 00:45:26.610 clat percentiles (usec): 00:45:26.610 | 1.00th=[ 461], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 498], 00:45:26.610 | 30.00th=[ 529], 40.00th=[ 635], 50.00th=[ 922], 60.00th=[41157], 00:45:26.610 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:45:26.610 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:45:26.610 | 99.99th=[42206] 00:45:26.610 bw ( KiB/s): min= 448, max= 1184, per=51.64%, avg=788.80, stdev=215.11, samples=20 00:45:26.610 iops : min= 112, max= 296, avg=197.20, stdev=53.78, samples=20 00:45:26.610 lat (usec) : 500=22.32%, 750=19.38%, 1000=9.21% 00:45:26.610 lat (msec) : 2=0.30%, 10=0.20%, 50=48.58% 00:45:26.610 cpu : usr=96.26%, sys=3.29%, ctx=15, majf=0, minf=0 00:45:26.610 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:26.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:26.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:26.610 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:26.610 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:26.610 00:45:26.610 Run status group 0 (all jobs): 00:45:26.611 READ: bw=1526KiB/s (1562kB/s), 738KiB/s-788KiB/s (756kB/s-807kB/s), io=15.0MiB (15.7MB), run=10033-10035msec 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.611 
01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.611 00:45:26.611 real 0m11.272s 00:45:26.611 user 0m20.106s 00:45:26.611 sys 0m0.955s 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:45:26.611 01:11:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:26.611 ************************************ 00:45:26.611 END TEST fio_dif_1_multi_subsystems 00:45:26.611 ************************************ 00:45:26.611 01:11:29 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:26.611 01:11:29 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:45:26.611 01:11:29 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:45:26.611 01:11:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:26.611 ************************************ 00:45:26.611 START TEST fio_dif_rand_params 00:45:26.611 ************************************ 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # 
local sub_id=0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:26.611 bdev_null0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:26.611 [2024-05-15 01:11:29.678151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:26.611 01:11:29 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:45:26.611 01:11:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:26.611 { 00:45:26.611 "params": { 00:45:26.611 "name": "Nvme$subsystem", 00:45:26.611 "trtype": "$TEST_TRANSPORT", 00:45:26.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:26.611 "adrfam": "ipv4", 00:45:26.611 "trsvcid": "$NVMF_PORT", 00:45:26.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:26.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:26.612 "hdgst": ${hdgst:-false}, 00:45:26.612 "ddgst": ${ddgst:-false} 00:45:26.612 }, 00:45:26.612 "method": "bdev_nvme_attach_controller" 00:45:26.612 } 00:45:26.612 EOF 00:45:26.612 )") 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
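The rpc_cmd calls traced above create one null bdev with 16-byte metadata and DIF type 3 and export it over NVMe/TCP. The same setup can be issued directly with SPDK's scripts/rpc.py; a sketch, assuming rpc.py lives under the repo path seen in the trace, with every name, size and flag copied from the rpc_cmd lines (the matching teardown run by destroy_subsystem at the end of each test is shown commented out):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode0

    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3 -- as traced above
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem "$NQN" --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns "$NQN" bdev_null0
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # Teardown, mirroring the destroy_subsystem trace earlier in this log:
    # $RPC nvmf_delete_subsystem "$NQN"
    # $RPC bdev_null_delete bdev_null0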
00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:26.612 "params": { 00:45:26.612 "name": "Nvme0", 00:45:26.612 "trtype": "tcp", 00:45:26.612 "traddr": "10.0.0.2", 00:45:26.612 "adrfam": "ipv4", 00:45:26.612 "trsvcid": "4420", 00:45:26.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:26.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:26.612 "hdgst": false, 00:45:26.612 "ddgst": false 00:45:26.612 }, 00:45:26.612 "method": "bdev_nvme_attach_controller" 00:45:26.612 }' 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:26.612 01:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:26.870 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:26.870 ... 
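The fio job file itself is passed on /dev/fd/61 and never echoed, so the following is only an approximation reconstructed from the parameters visible above (randread, bs=128k, iodepth=3, numjobs=3, runtime=5, three threads); the bdev name Nvme0n1 is an assumption based on SPDK's usual <controller>n<namespace> naming:

    # Illustrative stand-in for the job file the harness generates for this run.
    cat > dif.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1
    EOF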
00:45:26.870 fio-3.35 00:45:26.870 Starting 3 threads 00:45:33.423 00:45:33.423 filename0: (groupid=0, jobs=1): err= 0: pid=116383: Wed May 15 01:11:35 2024 00:45:33.423 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(141MiB/5006msec) 00:45:33.423 slat (nsec): min=7713, max=51213, avg=13043.14, stdev=4713.61 00:45:33.423 clat (usec): min=7047, max=54190, avg=13258.07, stdev=4792.68 00:45:33.423 lat (usec): min=7058, max=54216, avg=13271.12, stdev=4793.22 00:45:33.423 clat percentiles (usec): 00:45:33.423 | 1.00th=[ 7898], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11731], 00:45:33.423 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:45:33.423 | 70.00th=[13566], 80.00th=[14091], 90.00th=[14615], 95.00th=[15270], 00:45:33.423 | 99.00th=[51119], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:45:33.423 | 99.99th=[54264] 00:45:33.423 bw ( KiB/s): min=23808, max=31744, per=33.49%, avg=28876.80, stdev=2400.89, samples=10 00:45:33.423 iops : min= 186, max= 248, avg=225.60, stdev=18.76, samples=10 00:45:33.423 lat (msec) : 10=5.13%, 20=93.37%, 50=0.44%, 100=1.06% 00:45:33.423 cpu : usr=92.85%, sys=5.67%, ctx=34, majf=0, minf=0 00:45:33.423 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:33.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.423 issued rwts: total=1131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:33.423 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:33.423 filename0: (groupid=0, jobs=1): err= 0: pid=116384: Wed May 15 01:11:35 2024 00:45:33.423 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(122MiB/5006msec) 00:45:33.423 slat (nsec): min=5618, max=35956, avg=13104.04, stdev=3960.93 00:45:33.423 clat (usec): min=4301, max=59661, avg=15393.33, stdev=3684.42 00:45:33.423 lat (usec): min=4311, max=59683, avg=15406.43, stdev=3684.19 00:45:33.423 clat percentiles (usec): 00:45:33.423 | 1.00th=[ 7177], 5.00th=[ 8848], 10.00th=[10421], 20.00th=[14484], 00:45:33.424 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:45:33.424 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17957], 95.00th=[19006], 00:45:33.424 | 99.00th=[20841], 99.50th=[25560], 99.90th=[59507], 99.95th=[59507], 00:45:33.424 | 99.99th=[59507] 00:45:33.424 bw ( KiB/s): min=20480, max=31744, per=28.86%, avg=24883.20, stdev=3393.65, samples=10 00:45:33.424 iops : min= 160, max= 248, avg=194.40, stdev=26.51, samples=10 00:45:33.424 lat (msec) : 10=9.14%, 20=87.99%, 50=2.57%, 100=0.31% 00:45:33.424 cpu : usr=92.55%, sys=6.19%, ctx=22, majf=0, minf=0 00:45:33.424 IO depths : 1=8.5%, 2=91.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:33.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.424 issued rwts: total=974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:33.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:33.424 filename0: (groupid=0, jobs=1): err= 0: pid=116385: Wed May 15 01:11:35 2024 00:45:33.424 read: IOPS=253, BW=31.7MiB/s (33.2MB/s)(159MiB/5007msec) 00:45:33.424 slat (nsec): min=5409, max=73434, avg=15048.73, stdev=4916.24 00:45:33.424 clat (usec): min=6397, max=57056, avg=11825.62, stdev=4279.64 00:45:33.424 lat (usec): min=6410, max=57090, avg=11840.67, stdev=4279.61 00:45:33.424 clat percentiles (usec): 00:45:33.424 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 9896], 20.00th=[10552], 
00:45:33.424 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:45:33.424 | 70.00th=[12125], 80.00th=[12649], 90.00th=[13435], 95.00th=[14222], 00:45:33.424 | 99.00th=[20579], 99.50th=[51643], 99.90th=[56886], 99.95th=[56886], 00:45:33.424 | 99.99th=[56886] 00:45:33.424 bw ( KiB/s): min=27648, max=35072, per=37.56%, avg=32384.00, stdev=2557.87, samples=10 00:45:33.424 iops : min= 216, max= 274, avg=253.00, stdev=19.98, samples=10 00:45:33.424 lat (msec) : 10=11.04%, 20=87.93%, 50=0.08%, 100=0.95% 00:45:33.424 cpu : usr=92.55%, sys=5.75%, ctx=61, majf=0, minf=0 00:45:33.424 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:33.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.424 issued rwts: total=1268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:33.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:33.424 00:45:33.424 Run status group 0 (all jobs): 00:45:33.424 READ: bw=84.2MiB/s (88.3MB/s), 24.3MiB/s-31.7MiB/s (25.5MB/s-33.2MB/s), io=422MiB (442MB), run=5006-5007msec 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 bdev_null0 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 [2024-05-15 01:11:35.724267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 bdev_null1 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 bdev_null2 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.424 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:33.425 { 00:45:33.425 "params": { 00:45:33.425 "name": 
"Nvme$subsystem", 00:45:33.425 "trtype": "$TEST_TRANSPORT", 00:45:33.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:33.425 "adrfam": "ipv4", 00:45:33.425 "trsvcid": "$NVMF_PORT", 00:45:33.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:33.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:33.425 "hdgst": ${hdgst:-false}, 00:45:33.425 "ddgst": ${ddgst:-false} 00:45:33.425 }, 00:45:33.425 "method": "bdev_nvme_attach_controller" 00:45:33.425 } 00:45:33.425 EOF 00:45:33.425 )") 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:33.425 { 00:45:33.425 "params": { 00:45:33.425 "name": "Nvme$subsystem", 00:45:33.425 "trtype": "$TEST_TRANSPORT", 00:45:33.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:33.425 "adrfam": "ipv4", 00:45:33.425 "trsvcid": "$NVMF_PORT", 00:45:33.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:33.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:33.425 "hdgst": ${hdgst:-false}, 00:45:33.425 "ddgst": ${ddgst:-false} 00:45:33.425 }, 00:45:33.425 "method": "bdev_nvme_attach_controller" 00:45:33.425 } 00:45:33.425 EOF 00:45:33.425 )") 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:33.425 01:11:35 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:33.425 { 00:45:33.425 "params": { 00:45:33.425 "name": "Nvme$subsystem", 00:45:33.425 "trtype": "$TEST_TRANSPORT", 00:45:33.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:33.425 "adrfam": "ipv4", 00:45:33.425 "trsvcid": "$NVMF_PORT", 00:45:33.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:33.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:33.425 "hdgst": ${hdgst:-false}, 00:45:33.425 "ddgst": ${ddgst:-false} 00:45:33.425 }, 00:45:33.425 "method": "bdev_nvme_attach_controller" 00:45:33.425 } 00:45:33.425 EOF 00:45:33.425 )") 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:33.425 "params": { 00:45:33.425 "name": "Nvme0", 00:45:33.425 "trtype": "tcp", 00:45:33.425 "traddr": "10.0.0.2", 00:45:33.425 "adrfam": "ipv4", 00:45:33.425 "trsvcid": "4420", 00:45:33.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:33.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:33.425 "hdgst": false, 00:45:33.425 "ddgst": false 00:45:33.425 }, 00:45:33.425 "method": "bdev_nvme_attach_controller" 00:45:33.425 },{ 00:45:33.425 "params": { 00:45:33.425 "name": "Nvme1", 00:45:33.425 "trtype": "tcp", 00:45:33.425 "traddr": "10.0.0.2", 00:45:33.425 "adrfam": "ipv4", 00:45:33.425 "trsvcid": "4420", 00:45:33.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:33.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:33.425 "hdgst": false, 00:45:33.425 "ddgst": false 00:45:33.425 }, 00:45:33.425 "method": "bdev_nvme_attach_controller" 00:45:33.425 },{ 00:45:33.425 "params": { 00:45:33.425 "name": "Nvme2", 00:45:33.425 "trtype": "tcp", 00:45:33.425 "traddr": "10.0.0.2", 00:45:33.425 "adrfam": "ipv4", 00:45:33.425 "trsvcid": "4420", 00:45:33.425 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:33.425 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:33.425 "hdgst": false, 00:45:33.425 "ddgst": false 00:45:33.425 }, 00:45:33.425 "method": "bdev_nvme_attach_controller" 00:45:33.425 }' 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:33.425 
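The NULL_DIF=2 case repeats the same per-subsystem sequence for subsystems 0 through 2, which is what yields the three bdev_nvme_attach_controller entries printed above. A compact sketch of that loop with rpc.py (path assumed as in the earlier sketch; names, geometry and flags copied from the rpc_cmd trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2; do
        # Same null-bdev geometry as before, but DIF type 2 for this test case
        $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done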
01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:33.425 01:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:33.425 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:33.425 ... 00:45:33.425 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:33.425 ... 00:45:33.425 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:33.425 ... 00:45:33.425 fio-3.35 00:45:33.425 Starting 24 threads 00:45:45.639 00:45:45.639 filename0: (groupid=0, jobs=1): err= 0: pid=116487: Wed May 15 01:11:46 2024 00:45:45.639 read: IOPS=277, BW=1112KiB/s (1138kB/s)(10.9MiB/10028msec) 00:45:45.639 slat (usec): min=7, max=4020, avg=13.40, stdev=76.24 00:45:45.639 clat (msec): min=2, max=127, avg=57.41, stdev=19.00 00:45:45.639 lat (msec): min=2, max=127, avg=57.43, stdev=19.00 00:45:45.639 clat percentiles (msec): 00:45:45.639 | 1.00th=[ 4], 5.00th=[ 32], 10.00th=[ 40], 20.00th=[ 46], 00:45:45.639 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 60], 00:45:45.639 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 91], 00:45:45.639 | 99.00th=[ 108], 99.50th=[ 117], 99.90th=[ 128], 99.95th=[ 128], 00:45:45.639 | 99.99th=[ 128] 00:45:45.639 bw ( KiB/s): min= 896, max= 1884, per=5.38%, avg=1111.80, stdev=204.81, samples=20 00:45:45.639 iops : min= 224, max= 471, avg=277.95, stdev=51.20, samples=20 00:45:45.639 lat (msec) : 4=1.40%, 10=2.05%, 20=0.25%, 50=31.36%, 100=62.97% 00:45:45.639 lat (msec) : 250=1.97% 00:45:45.639 cpu : usr=45.05%, sys=0.97%, ctx=1301, majf=0, minf=0 00:45:45.639 IO depths : 1=0.1%, 2=0.4%, 4=5.3%, 8=80.7%, 16=13.5%, 32=0.0%, >=64=0.0% 00:45:45.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 complete : 0=0.0%, 4=88.9%, 8=6.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 issued rwts: total=2787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.639 filename0: (groupid=0, jobs=1): err= 0: pid=116488: Wed May 15 01:11:46 2024 00:45:45.639 read: IOPS=222, BW=890KiB/s (911kB/s)(8952KiB/10063msec) 00:45:45.639 slat (usec): min=5, max=8030, avg=20.50, stdev=239.71 00:45:45.639 clat (msec): min=32, max=191, avg=71.83, stdev=22.42 00:45:45.639 lat (msec): min=32, max=191, avg=71.85, stdev=22.42 00:45:45.639 clat percentiles (msec): 00:45:45.639 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:45:45.639 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 73], 00:45:45.639 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 97], 95.00th=[ 116], 00:45:45.639 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 192], 99.95th=[ 192], 00:45:45.639 | 99.99th=[ 192] 00:45:45.639 bw ( KiB/s): min= 664, max= 1120, per=4.30%, avg=888.85, stdev=126.01, samples=20 00:45:45.639 iops : min= 166, max= 280, avg=222.20, stdev=31.50, samples=20 00:45:45.639 lat (msec) : 50=20.38%, 100=71.05%, 250=8.58% 00:45:45.639 cpu : usr=35.23%, sys=0.70%, ctx=1005, majf=0, minf=9 00:45:45.639 IO depths : 1=1.0%, 2=2.1%, 4=8.1%, 8=76.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:45:45.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 complete : 0=0.0%, 4=89.5%, 8=6.3%, 16=4.2%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.639 filename0: (groupid=0, jobs=1): err= 0: pid=116489: Wed May 15 01:11:46 2024 00:45:45.639 read: IOPS=201, BW=808KiB/s (827kB/s)(8104KiB/10033msec) 00:45:45.639 slat (usec): min=5, max=7034, avg=18.62, stdev=180.16 00:45:45.639 clat (msec): min=33, max=200, avg=79.08, stdev=22.41 00:45:45.639 lat (msec): min=33, max=200, avg=79.10, stdev=22.41 00:45:45.639 clat percentiles (msec): 00:45:45.639 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 62], 00:45:45.639 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 82], 00:45:45.639 | 70.00th=[ 88], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 122], 00:45:45.639 | 99.00th=[ 138], 99.50th=[ 157], 99.90th=[ 201], 99.95th=[ 201], 00:45:45.639 | 99.99th=[ 201] 00:45:45.639 bw ( KiB/s): min= 640, max= 1024, per=3.84%, avg=793.37, stdev=101.15, samples=19 00:45:45.639 iops : min= 160, max= 256, avg=198.32, stdev=25.28, samples=19 00:45:45.639 lat (msec) : 50=8.74%, 100=75.27%, 250=15.99% 00:45:45.639 cpu : usr=42.25%, sys=0.72%, ctx=1319, majf=0, minf=9 00:45:45.639 IO depths : 1=1.7%, 2=3.6%, 4=12.0%, 8=70.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:45:45.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 issued rwts: total=2026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.639 filename0: (groupid=0, jobs=1): err= 0: pid=116490: Wed May 15 01:11:46 2024 00:45:45.639 read: IOPS=220, BW=880KiB/s (901kB/s)(8840KiB/10044msec) 00:45:45.639 slat (usec): min=4, max=11045, avg=33.02, stdev=413.69 00:45:45.639 clat (msec): min=26, max=152, avg=72.30, stdev=20.09 00:45:45.639 lat (msec): min=26, max=152, avg=72.34, stdev=20.10 00:45:45.639 clat percentiles (msec): 00:45:45.639 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:45:45.639 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:45:45.639 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 100], 95.00th=[ 109], 00:45:45.639 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 142], 99.95th=[ 153], 00:45:45.639 | 99.99th=[ 153] 00:45:45.639 bw ( KiB/s): min= 640, max= 1072, per=4.25%, avg=877.65, stdev=120.42, samples=20 00:45:45.639 iops : min= 160, max= 268, avg=219.40, stdev=30.10, samples=20 00:45:45.639 lat (msec) : 50=16.02%, 100=74.43%, 250=9.55% 00:45:45.639 cpu : usr=33.43%, sys=0.52%, ctx=922, majf=0, minf=9 00:45:45.639 IO depths : 1=1.5%, 2=3.4%, 4=11.4%, 8=71.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:45:45.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 issued rwts: total=2210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.639 filename0: (groupid=0, jobs=1): err= 0: pid=116491: Wed May 15 01:11:46 2024 00:45:45.639 read: IOPS=193, BW=775KiB/s (794kB/s)(7780KiB/10039msec) 00:45:45.639 slat (usec): min=5, max=4020, avg=16.24, stdev=91.24 00:45:45.639 clat (msec): min=34, max=183, avg=82.46, stdev=23.68 00:45:45.639 lat (msec): min=35, max=183, avg=82.48, stdev=23.68 00:45:45.639 clat percentiles (msec): 00:45:45.639 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 67], 00:45:45.639 | 30.00th=[ 72], 40.00th=[ 73], 
50.00th=[ 78], 60.00th=[ 84], 00:45:45.639 | 70.00th=[ 90], 80.00th=[ 97], 90.00th=[ 115], 95.00th=[ 128], 00:45:45.639 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 184], 00:45:45.639 | 99.99th=[ 184] 00:45:45.639 bw ( KiB/s): min= 552, max= 1048, per=3.75%, avg=774.00, stdev=108.32, samples=19 00:45:45.639 iops : min= 138, max= 262, avg=193.47, stdev=27.10, samples=19 00:45:45.639 lat (msec) : 50=7.10%, 100=74.09%, 250=18.82% 00:45:45.639 cpu : usr=39.03%, sys=0.70%, ctx=1162, majf=0, minf=9 00:45:45.639 IO depths : 1=2.7%, 2=6.0%, 4=16.0%, 8=65.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:45:45.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 complete : 0=0.0%, 4=91.6%, 8=2.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.639 filename0: (groupid=0, jobs=1): err= 0: pid=116492: Wed May 15 01:11:46 2024 00:45:45.639 read: IOPS=193, BW=775KiB/s (794kB/s)(7776KiB/10028msec) 00:45:45.639 slat (usec): min=3, max=8058, avg=25.09, stdev=291.85 00:45:45.639 clat (msec): min=36, max=175, avg=82.23, stdev=19.52 00:45:45.639 lat (msec): min=36, max=175, avg=82.26, stdev=19.52 00:45:45.639 clat percentiles (msec): 00:45:45.639 | 1.00th=[ 47], 5.00th=[ 56], 10.00th=[ 60], 20.00th=[ 70], 00:45:45.639 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 84], 00:45:45.639 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:45:45.639 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 176], 99.95th=[ 176], 00:45:45.639 | 99.99th=[ 176] 00:45:45.639 bw ( KiB/s): min= 640, max= 896, per=3.74%, avg=771.05, stdev=69.40, samples=19 00:45:45.639 iops : min= 160, max= 224, avg=192.74, stdev=17.35, samples=19 00:45:45.639 lat (msec) : 50=3.81%, 100=80.56%, 250=15.64% 00:45:45.639 cpu : usr=35.48%, sys=0.72%, ctx=991, majf=0, minf=9 00:45:45.639 IO depths : 1=2.1%, 2=5.0%, 4=14.4%, 8=66.9%, 16=11.6%, 32=0.0%, >=64=0.0% 00:45:45.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 complete : 0=0.0%, 4=91.5%, 8=4.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.639 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.639 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.639 filename0: (groupid=0, jobs=1): err= 0: pid=116493: Wed May 15 01:11:46 2024 00:45:45.639 read: IOPS=200, BW=802KiB/s (822kB/s)(8040KiB/10021msec) 00:45:45.639 slat (usec): min=5, max=8034, avg=24.85, stdev=273.79 00:45:45.639 clat (msec): min=33, max=200, avg=79.56, stdev=24.67 00:45:45.640 lat (msec): min=33, max=200, avg=79.58, stdev=24.66 00:45:45.640 clat percentiles (msec): 00:45:45.640 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 59], 00:45:45.640 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:45:45.640 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 110], 95.00th=[ 129], 00:45:45.640 | 99.00th=[ 153], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 201], 00:45:45.640 | 99.99th=[ 201] 00:45:45.640 bw ( KiB/s): min= 640, max= 1152, per=3.89%, avg=802.95, stdev=163.72, samples=19 00:45:45.640 iops : min= 160, max= 288, avg=200.74, stdev=40.93, samples=19 00:45:45.640 lat (msec) : 50=12.04%, 100=71.49%, 250=16.47% 00:45:45.640 cpu : usr=35.93%, sys=0.82%, ctx=989, majf=0, minf=9 00:45:45.640 IO depths : 1=2.0%, 2=4.6%, 4=13.4%, 8=68.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:45:45.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:45:45.640 complete : 0=0.0%, 4=91.1%, 8=3.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.640 filename0: (groupid=0, jobs=1): err= 0: pid=116494: Wed May 15 01:11:46 2024 00:45:45.640 read: IOPS=190, BW=763KiB/s (782kB/s)(7640KiB/10009msec) 00:45:45.640 slat (usec): min=5, max=8034, avg=26.75, stdev=317.63 00:45:45.640 clat (msec): min=35, max=164, avg=83.63, stdev=20.78 00:45:45.640 lat (msec): min=35, max=164, avg=83.65, stdev=20.78 00:45:45.640 clat percentiles (msec): 00:45:45.640 | 1.00th=[ 47], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 72], 00:45:45.640 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 85], 00:45:45.640 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:45:45.640 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 165], 99.95th=[ 165], 00:45:45.640 | 99.99th=[ 165] 00:45:45.640 bw ( KiB/s): min= 640, max= 896, per=3.70%, avg=763.79, stdev=69.96, samples=19 00:45:45.640 iops : min= 160, max= 224, avg=190.95, stdev=17.49, samples=19 00:45:45.640 lat (msec) : 50=4.40%, 100=81.36%, 250=14.24% 00:45:45.640 cpu : usr=32.69%, sys=0.68%, ctx=930, majf=0, minf=9 00:45:45.640 IO depths : 1=2.6%, 2=6.0%, 4=16.1%, 8=65.0%, 16=10.4%, 32=0.0%, >=64=0.0% 00:45:45.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.640 filename1: (groupid=0, jobs=1): err= 0: pid=116495: Wed May 15 01:11:46 2024 00:45:45.640 read: IOPS=237, BW=950KiB/s (973kB/s)(9528KiB/10027msec) 00:45:45.640 slat (nsec): min=6244, max=58358, avg=13052.79, stdev=7243.34 00:45:45.640 clat (msec): min=2, max=190, avg=67.21, stdev=24.71 00:45:45.640 lat (msec): min=2, max=190, avg=67.23, stdev=24.71 00:45:45.640 clat percentiles (msec): 00:45:45.640 | 1.00th=[ 4], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 48], 00:45:45.640 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:45:45.640 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 109], 00:45:45.640 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 192], 99.95th=[ 192], 00:45:45.640 | 99.99th=[ 192] 00:45:45.640 bw ( KiB/s): min= 640, max= 1795, per=4.59%, avg=946.55, stdev=228.60, samples=20 00:45:45.640 iops : min= 160, max= 448, avg=236.60, stdev=57.00, samples=20 00:45:45.640 lat (msec) : 4=1.18%, 10=2.18%, 20=1.34%, 50=20.99%, 100=67.21% 00:45:45.640 lat (msec) : 250=7.09% 00:45:45.640 cpu : usr=36.35%, sys=0.87%, ctx=1119, majf=0, minf=9 00:45:45.640 IO depths : 1=1.2%, 2=2.8%, 4=12.1%, 8=72.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:45:45.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 issued rwts: total=2382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.640 filename1: (groupid=0, jobs=1): err= 0: pid=116496: Wed May 15 01:11:46 2024 00:45:45.640 read: IOPS=213, BW=853KiB/s (874kB/s)(8580KiB/10054msec) 00:45:45.640 slat (usec): min=5, max=9038, avg=36.68, stdev=415.15 00:45:45.640 clat (msec): min=30, max=167, avg=74.65, stdev=22.74 00:45:45.640 lat (msec): min=30, max=167, avg=74.69, stdev=22.74 00:45:45.640 clat percentiles (msec): 
00:45:45.640 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:45:45.640 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:45:45.640 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 120], 00:45:45.640 | 99.00th=[ 144], 99.50th=[ 163], 99.90th=[ 169], 99.95th=[ 169], 00:45:45.640 | 99.99th=[ 169] 00:45:45.640 bw ( KiB/s): min= 680, max= 1040, per=4.14%, avg=854.00, stdev=118.09, samples=20 00:45:45.640 iops : min= 170, max= 260, avg=213.50, stdev=29.52, samples=20 00:45:45.640 lat (msec) : 50=14.78%, 100=73.89%, 250=11.33% 00:45:45.640 cpu : usr=35.79%, sys=0.77%, ctx=1003, majf=0, minf=9 00:45:45.640 IO depths : 1=2.0%, 2=4.3%, 4=13.3%, 8=69.4%, 16=11.0%, 32=0.0%, >=64=0.0% 00:45:45.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 complete : 0=0.0%, 4=90.8%, 8=4.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 issued rwts: total=2145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.640 filename1: (groupid=0, jobs=1): err= 0: pid=116497: Wed May 15 01:11:46 2024 00:45:45.640 read: IOPS=202, BW=810KiB/s (830kB/s)(8148KiB/10054msec) 00:45:45.640 slat (usec): min=7, max=8064, avg=20.79, stdev=199.53 00:45:45.640 clat (msec): min=34, max=179, avg=78.72, stdev=23.39 00:45:45.640 lat (msec): min=34, max=179, avg=78.74, stdev=23.39 00:45:45.640 clat percentiles (msec): 00:45:45.640 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 63], 00:45:45.640 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:45:45.640 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 120], 00:45:45.640 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 180], 99.95th=[ 180], 00:45:45.640 | 99.99th=[ 180] 00:45:45.640 bw ( KiB/s): min= 640, max= 944, per=3.93%, avg=810.05, stdev=97.30, samples=20 00:45:45.640 iops : min= 160, max= 236, avg=202.50, stdev=24.31, samples=20 00:45:45.640 lat (msec) : 50=9.52%, 100=78.40%, 250=12.08% 00:45:45.640 cpu : usr=39.55%, sys=0.92%, ctx=1138, majf=0, minf=9 00:45:45.640 IO depths : 1=2.6%, 2=5.4%, 4=14.2%, 8=67.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:45:45.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 issued rwts: total=2037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.640 filename1: (groupid=0, jobs=1): err= 0: pid=116498: Wed May 15 01:11:46 2024 00:45:45.640 read: IOPS=208, BW=832KiB/s (852kB/s)(8360KiB/10046msec) 00:45:45.640 slat (usec): min=5, max=8039, avg=27.22, stdev=290.88 00:45:45.640 clat (msec): min=34, max=140, avg=76.66, stdev=20.24 00:45:45.640 lat (msec): min=34, max=140, avg=76.69, stdev=20.24 00:45:45.640 clat percentiles (msec): 00:45:45.640 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 59], 00:45:45.640 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:45:45.640 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 112], 00:45:45.640 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 140], 00:45:45.640 | 99.99th=[ 140] 00:45:45.640 bw ( KiB/s): min= 640, max= 1152, per=4.02%, avg=829.30, stdev=120.46, samples=20 00:45:45.640 iops : min= 160, max= 288, avg=207.25, stdev=30.17, samples=20 00:45:45.640 lat (msec) : 50=11.39%, 100=75.60%, 250=13.01% 00:45:45.640 cpu : usr=41.59%, sys=0.80%, ctx=1169, majf=0, minf=9 00:45:45.640 IO depths : 1=1.5%, 2=3.3%, 4=10.4%, 8=72.6%, 
16=12.1%, 32=0.0%, >=64=0.0% 00:45:45.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.640 filename1: (groupid=0, jobs=1): err= 0: pid=116499: Wed May 15 01:11:46 2024 00:45:45.640 read: IOPS=192, BW=771KiB/s (790kB/s)(7744KiB/10044msec) 00:45:45.640 slat (usec): min=5, max=8057, avg=20.17, stdev=204.35 00:45:45.640 clat (msec): min=36, max=175, avg=82.86, stdev=22.16 00:45:45.640 lat (msec): min=36, max=175, avg=82.88, stdev=22.15 00:45:45.640 clat percentiles (msec): 00:45:45.640 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 69], 00:45:45.640 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 77], 60.00th=[ 82], 00:45:45.640 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 128], 00:45:45.640 | 99.00th=[ 157], 99.50th=[ 176], 99.90th=[ 176], 99.95th=[ 176], 00:45:45.640 | 99.99th=[ 176] 00:45:45.640 bw ( KiB/s): min= 512, max= 896, per=3.72%, avg=767.95, stdev=76.01, samples=20 00:45:45.640 iops : min= 128, max= 224, avg=191.95, stdev=19.00, samples=20 00:45:45.640 lat (msec) : 50=4.49%, 100=78.77%, 250=16.74% 00:45:45.640 cpu : usr=42.24%, sys=0.95%, ctx=1184, majf=0, minf=9 00:45:45.640 IO depths : 1=3.6%, 2=7.7%, 4=19.2%, 8=60.5%, 16=9.0%, 32=0.0%, >=64=0.0% 00:45:45.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 complete : 0=0.0%, 4=92.3%, 8=2.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.640 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.640 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.640 filename1: (groupid=0, jobs=1): err= 0: pid=116501: Wed May 15 01:11:46 2024 00:45:45.640 read: IOPS=235, BW=940KiB/s (963kB/s)(9420KiB/10016msec) 00:45:45.640 slat (usec): min=4, max=8028, avg=26.37, stdev=330.01 00:45:45.640 clat (msec): min=34, max=132, avg=67.88, stdev=19.12 00:45:45.641 lat (msec): min=34, max=132, avg=67.91, stdev=19.11 00:45:45.641 clat percentiles (msec): 00:45:45.641 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:45:45.641 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:45:45.641 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 106], 00:45:45.641 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:45:45.641 | 99.99th=[ 133] 00:45:45.641 bw ( KiB/s): min= 768, max= 1080, per=4.53%, avg=935.60, stdev=89.89, samples=20 00:45:45.641 iops : min= 192, max= 270, avg=233.90, stdev=22.47, samples=20 00:45:45.641 lat (msec) : 50=25.10%, 100=69.17%, 250=5.73% 00:45:45.641 cpu : usr=32.46%, sys=0.72%, ctx=902, majf=0, minf=9 00:45:45.641 IO depths : 1=0.4%, 2=0.8%, 4=6.2%, 8=78.9%, 16=13.6%, 32=0.0%, >=64=0.0% 00:45:45.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 complete : 0=0.0%, 4=89.2%, 8=6.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 issued rwts: total=2355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.641 filename1: (groupid=0, jobs=1): err= 0: pid=116502: Wed May 15 01:11:46 2024 00:45:45.641 read: IOPS=195, BW=783KiB/s (802kB/s)(7872KiB/10052msec) 00:45:45.641 slat (usec): min=3, max=8048, avg=21.95, stdev=255.82 00:45:45.641 clat (msec): min=33, max=191, avg=81.48, stdev=24.24 00:45:45.641 lat (msec): min=33, max=191, avg=81.50, 
stdev=24.24 00:45:45.641 clat percentiles (msec): 00:45:45.641 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 63], 00:45:45.641 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 84], 00:45:45.641 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:45:45.641 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:45:45.641 | 99.99th=[ 192] 00:45:45.641 bw ( KiB/s): min= 512, max= 1120, per=3.78%, avg=780.10, stdev=137.68, samples=20 00:45:45.641 iops : min= 128, max= 280, avg=195.00, stdev=34.40, samples=20 00:45:45.641 lat (msec) : 50=10.37%, 100=73.37%, 250=16.26% 00:45:45.641 cpu : usr=33.26%, sys=0.59%, ctx=949, majf=0, minf=9 00:45:45.641 IO depths : 1=2.2%, 2=4.9%, 4=14.3%, 8=67.8%, 16=10.7%, 32=0.0%, >=64=0.0% 00:45:45.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 complete : 0=0.0%, 4=90.9%, 8=3.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.641 filename1: (groupid=0, jobs=1): err= 0: pid=116503: Wed May 15 01:11:46 2024 00:45:45.641 read: IOPS=206, BW=825KiB/s (845kB/s)(8288KiB/10043msec) 00:45:45.641 slat (usec): min=5, max=8033, avg=23.68, stdev=260.59 00:45:45.641 clat (msec): min=34, max=157, avg=77.42, stdev=21.24 00:45:45.641 lat (msec): min=34, max=157, avg=77.44, stdev=21.24 00:45:45.641 clat percentiles (msec): 00:45:45.641 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 62], 00:45:45.641 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:45:45.641 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 121], 00:45:45.641 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 159], 00:45:45.641 | 99.99th=[ 159] 00:45:45.641 bw ( KiB/s): min= 688, max= 992, per=3.98%, avg=822.40, stdev=75.81, samples=20 00:45:45.641 iops : min= 172, max= 248, avg=205.55, stdev=18.96, samples=20 00:45:45.641 lat (msec) : 50=9.60%, 100=76.50%, 250=13.90% 00:45:45.641 cpu : usr=37.47%, sys=0.81%, ctx=1177, majf=0, minf=9 00:45:45.641 IO depths : 1=1.9%, 2=4.5%, 4=12.9%, 8=69.3%, 16=11.4%, 32=0.0%, >=64=0.0% 00:45:45.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.641 filename2: (groupid=0, jobs=1): err= 0: pid=116504: Wed May 15 01:11:46 2024 00:45:45.641 read: IOPS=246, BW=987KiB/s (1010kB/s)(9880KiB/10014msec) 00:45:45.641 slat (usec): min=4, max=8030, avg=15.73, stdev=161.48 00:45:45.641 clat (msec): min=24, max=153, avg=64.68, stdev=19.94 00:45:45.641 lat (msec): min=24, max=153, avg=64.70, stdev=19.95 00:45:45.641 clat percentiles (msec): 00:45:45.641 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:45:45.641 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 67], 00:45:45.641 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 103], 00:45:45.641 | 99.00th=[ 123], 99.50th=[ 133], 99.90th=[ 155], 99.95th=[ 155], 00:45:45.641 | 99.99th=[ 155] 00:45:45.641 bw ( KiB/s): min= 640, max= 1168, per=4.78%, avg=987.20, stdev=130.64, samples=20 00:45:45.641 iops : min= 160, max= 292, avg=246.80, stdev=32.66, samples=20 00:45:45.641 lat (msec) : 50=30.93%, 100=63.89%, 250=5.18% 00:45:45.641 cpu : usr=41.98%, sys=1.07%, ctx=1171, majf=0, minf=9 00:45:45.641 
IO depths : 1=0.6%, 2=1.3%, 4=7.0%, 8=77.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:45:45.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 complete : 0=0.0%, 4=89.4%, 8=6.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 issued rwts: total=2470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.641 filename2: (groupid=0, jobs=1): err= 0: pid=116505: Wed May 15 01:11:46 2024 00:45:45.641 read: IOPS=219, BW=880KiB/s (901kB/s)(8840KiB/10051msec) 00:45:45.641 slat (usec): min=5, max=3555, avg=15.80, stdev=94.31 00:45:45.641 clat (msec): min=34, max=186, avg=72.56, stdev=22.79 00:45:45.641 lat (msec): min=34, max=186, avg=72.58, stdev=22.80 00:45:45.641 clat percentiles (msec): 00:45:45.641 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:45:45.641 | 30.00th=[ 60], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:45:45.641 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 102], 95.00th=[ 114], 00:45:45.641 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 186], 99.95th=[ 186], 00:45:45.641 | 99.99th=[ 186] 00:45:45.641 bw ( KiB/s): min= 640, max= 1184, per=4.26%, avg=878.60, stdev=152.47, samples=20 00:45:45.641 iops : min= 160, max= 296, avg=219.60, stdev=38.08, samples=20 00:45:45.641 lat (msec) : 50=17.42%, 100=71.67%, 250=10.90% 00:45:45.641 cpu : usr=39.44%, sys=0.75%, ctx=1231, majf=0, minf=9 00:45:45.641 IO depths : 1=1.7%, 2=3.5%, 4=12.4%, 8=70.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:45:45.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 issued rwts: total=2210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.641 filename2: (groupid=0, jobs=1): err= 0: pid=116506: Wed May 15 01:11:46 2024 00:45:45.641 read: IOPS=234, BW=939KiB/s (962kB/s)(9440KiB/10050msec) 00:45:45.641 slat (usec): min=5, max=8025, avg=17.54, stdev=165.10 00:45:45.641 clat (msec): min=11, max=143, avg=67.95, stdev=20.31 00:45:45.641 lat (msec): min=11, max=143, avg=67.97, stdev=20.32 00:45:45.641 clat percentiles (msec): 00:45:45.641 | 1.00th=[ 14], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 49], 00:45:45.641 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:45:45.641 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 99], 00:45:45.641 | 99.00th=[ 122], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:45:45.641 | 99.99th=[ 144] 00:45:45.641 bw ( KiB/s): min= 688, max= 1200, per=4.54%, avg=937.45, stdev=139.38, samples=20 00:45:45.641 iops : min= 172, max= 300, avg=234.35, stdev=34.84, samples=20 00:45:45.641 lat (msec) : 20=1.36%, 50=21.40%, 100=72.63%, 250=4.62% 00:45:45.641 cpu : usr=35.42%, sys=0.78%, ctx=1006, majf=0, minf=9 00:45:45.641 IO depths : 1=0.8%, 2=1.9%, 4=8.8%, 8=75.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:45:45.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 issued rwts: total=2360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.641 filename2: (groupid=0, jobs=1): err= 0: pid=116507: Wed May 15 01:11:46 2024 00:45:45.641 read: IOPS=209, BW=838KiB/s (858kB/s)(8420KiB/10053msec) 00:45:45.641 slat (usec): min=4, max=5030, avg=19.39, stdev=155.04 00:45:45.641 clat (msec): min=35, max=151, avg=76.18, stdev=21.00 
00:45:45.641 lat (msec): min=35, max=151, avg=76.20, stdev=21.01 00:45:45.641 clat percentiles (msec): 00:45:45.641 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 58], 00:45:45.641 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 80], 00:45:45.641 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 113], 00:45:45.641 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:45:45.641 | 99.99th=[ 153] 00:45:45.641 bw ( KiB/s): min= 560, max= 1024, per=4.05%, avg=835.65, stdev=123.97, samples=20 00:45:45.641 iops : min= 140, max= 256, avg=208.90, stdev=31.00, samples=20 00:45:45.641 lat (msec) : 50=10.83%, 100=75.49%, 250=13.68% 00:45:45.641 cpu : usr=41.36%, sys=0.85%, ctx=1453, majf=0, minf=9 00:45:45.641 IO depths : 1=1.6%, 2=3.4%, 4=11.6%, 8=71.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:45:45.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.641 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.641 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.641 filename2: (groupid=0, jobs=1): err= 0: pid=116508: Wed May 15 01:11:46 2024 00:45:45.642 read: IOPS=202, BW=808KiB/s (827kB/s)(8124KiB/10054msec) 00:45:45.642 slat (usec): min=6, max=8049, avg=22.58, stdev=218.30 00:45:45.642 clat (msec): min=36, max=173, avg=78.94, stdev=20.47 00:45:45.642 lat (msec): min=36, max=173, avg=78.96, stdev=20.49 00:45:45.642 clat percentiles (msec): 00:45:45.642 | 1.00th=[ 43], 5.00th=[ 49], 10.00th=[ 53], 20.00th=[ 65], 00:45:45.642 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 77], 60.00th=[ 81], 00:45:45.642 | 70.00th=[ 86], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 116], 00:45:45.642 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 174], 99.95th=[ 174], 00:45:45.642 | 99.99th=[ 174] 00:45:45.642 bw ( KiB/s): min= 640, max= 1024, per=3.91%, avg=806.05, stdev=110.74, samples=20 00:45:45.642 iops : min= 160, max= 256, avg=201.50, stdev=27.69, samples=20 00:45:45.642 lat (msec) : 50=7.78%, 100=79.57%, 250=12.65% 00:45:45.642 cpu : usr=40.68%, sys=0.85%, ctx=1345, majf=0, minf=9 00:45:45.642 IO depths : 1=3.0%, 2=6.3%, 4=16.7%, 8=64.1%, 16=10.0%, 32=0.0%, >=64=0.0% 00:45:45.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.642 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.642 issued rwts: total=2031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.642 filename2: (groupid=0, jobs=1): err= 0: pid=116509: Wed May 15 01:11:46 2024 00:45:45.642 read: IOPS=223, BW=893KiB/s (914kB/s)(8952KiB/10028msec) 00:45:45.642 slat (usec): min=7, max=8031, avg=27.89, stdev=338.68 00:45:45.642 clat (msec): min=11, max=203, avg=71.39, stdev=22.99 00:45:45.642 lat (msec): min=11, max=203, avg=71.42, stdev=23.00 00:45:45.642 clat percentiles (msec): 00:45:45.642 | 1.00th=[ 20], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 50], 00:45:45.642 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:45:45.642 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 110], 00:45:45.642 | 99.00th=[ 132], 99.50th=[ 157], 99.90th=[ 205], 99.95th=[ 205], 00:45:45.642 | 99.99th=[ 205] 00:45:45.642 bw ( KiB/s): min= 680, max= 1280, per=4.30%, avg=888.65, stdev=146.90, samples=20 00:45:45.642 iops : min= 170, max= 320, avg=222.15, stdev=36.72, samples=20 00:45:45.642 lat (msec) : 20=1.43%, 50=19.03%, 100=70.38%, 250=9.16% 00:45:45.642 
cpu : usr=32.56%, sys=0.63%, ctx=896, majf=0, minf=9 00:45:45.642 IO depths : 1=0.8%, 2=2.1%, 4=9.2%, 8=74.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:45:45.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.642 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.642 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.642 filename2: (groupid=0, jobs=1): err= 0: pid=116510: Wed May 15 01:11:46 2024 00:45:45.642 read: IOPS=203, BW=816KiB/s (835kB/s)(8196KiB/10048msec) 00:45:45.642 slat (usec): min=7, max=8026, avg=21.52, stdev=216.94 00:45:45.642 clat (msec): min=37, max=201, avg=78.31, stdev=23.40 00:45:45.642 lat (msec): min=37, max=201, avg=78.33, stdev=23.41 00:45:45.642 clat percentiles (msec): 00:45:45.642 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:45:45.642 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:45:45.642 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 120], 00:45:45.642 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 203], 99.95th=[ 203], 00:45:45.642 | 99.99th=[ 203] 00:45:45.642 bw ( KiB/s): min= 640, max= 1024, per=3.94%, avg=812.80, stdev=124.08, samples=20 00:45:45.642 iops : min= 160, max= 256, avg=203.15, stdev=31.05, samples=20 00:45:45.642 lat (msec) : 50=12.25%, 100=72.62%, 250=15.13% 00:45:45.642 cpu : usr=38.14%, sys=0.75%, ctx=1123, majf=0, minf=9 00:45:45.642 IO depths : 1=2.3%, 2=5.0%, 4=14.0%, 8=67.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:45:45.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.642 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.642 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.642 filename2: (groupid=0, jobs=1): err= 0: pid=116511: Wed May 15 01:11:46 2024 00:45:45.642 read: IOPS=237, BW=949KiB/s (972kB/s)(9544KiB/10053msec) 00:45:45.642 slat (usec): min=7, max=4027, avg=14.81, stdev=82.52 00:45:45.642 clat (msec): min=32, max=135, avg=67.22, stdev=19.69 00:45:45.642 lat (msec): min=32, max=135, avg=67.23, stdev=19.69 00:45:45.642 clat percentiles (msec): 00:45:45.642 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 48], 00:45:45.642 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 70], 00:45:45.642 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 106], 00:45:45.642 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 136], 99.95th=[ 136], 00:45:45.642 | 99.99th=[ 136] 00:45:45.642 bw ( KiB/s): min= 672, max= 1152, per=4.60%, avg=949.60, stdev=131.87, samples=20 00:45:45.642 iops : min= 168, max= 288, avg=237.40, stdev=32.97, samples=20 00:45:45.642 lat (msec) : 50=24.69%, 100=68.19%, 250=7.12% 00:45:45.642 cpu : usr=40.67%, sys=0.85%, ctx=1269, majf=0, minf=9 00:45:45.642 IO depths : 1=0.5%, 2=1.2%, 4=7.8%, 8=77.5%, 16=13.0%, 32=0.0%, >=64=0.0% 00:45:45.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.642 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:45.642 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:45.642 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:45.642 00:45:45.642 Run status group 0 (all jobs): 00:45:45.642 READ: bw=20.1MiB/s (21.1MB/s), 763KiB/s-1112KiB/s (782kB/s-1138kB/s), io=203MiB (213MB), run=10009-10063msec 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.642 01:11:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:45.642 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.643 bdev_null0 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.643 [2024-05-15 01:11:47.233433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:45:45.643 bdev_null1 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:45.643 { 00:45:45.643 "params": { 00:45:45.643 "name": "Nvme$subsystem", 00:45:45.643 "trtype": "$TEST_TRANSPORT", 00:45:45.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:45.643 "adrfam": "ipv4", 00:45:45.643 "trsvcid": "$NVMF_PORT", 00:45:45.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:45.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:45.643 "hdgst": ${hdgst:-false}, 00:45:45.643 "ddgst": ${ddgst:-false} 00:45:45.643 }, 00:45:45.643 "method": "bdev_nvme_attach_controller" 00:45:45.643 } 00:45:45.643 EOF 00:45:45.643 )") 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1336 -- # local sanitizers 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:45.643 { 00:45:45.643 "params": { 00:45:45.643 "name": "Nvme$subsystem", 00:45:45.643 "trtype": "$TEST_TRANSPORT", 00:45:45.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:45.643 "adrfam": "ipv4", 00:45:45.643 "trsvcid": "$NVMF_PORT", 00:45:45.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:45.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:45.643 "hdgst": ${hdgst:-false}, 00:45:45.643 "ddgst": ${ddgst:-false} 00:45:45.643 }, 00:45:45.643 "method": "bdev_nvme_attach_controller" 00:45:45.643 } 00:45:45.643 EOF 00:45:45.643 )") 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
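The xtrace entries above come from the fio_bdev/fio_plugin helpers in autotest_common.sh: before launching fio with the SPDK bdev ioengine, the harness runs ldd against the plugin, greps for an ASan runtime (libasan / libclang_rt.asan), and prepends any hit to LD_PRELOAD together with the plugin itself. A minimal standalone sketch of that pattern in bash, with the plugin path taken from this run and the wrapper logic simplified (this is not SPDK's actual helper):

  #!/usr/bin/env bash
  # Preload the sanitizer runtime (if the plugin links one) plus the plugin itself,
  # then hand everything else straight through to fio.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  ld_preload="$plugin"
  for lib in libasan libclang_rt.asan; do
      # Third ldd column is the resolved library path; empty if the plugin does not link it.
      asan_lib=$(ldd "$plugin" | grep "$lib" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && ld_preload="$asan_lib $ld_preload"
  done
  LD_PRELOAD="$ld_preload" /usr/src/fio/fio --ioengine=spdk_bdev "$@"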
00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:45.643 "params": { 00:45:45.643 "name": "Nvme0", 00:45:45.643 "trtype": "tcp", 00:45:45.643 "traddr": "10.0.0.2", 00:45:45.643 "adrfam": "ipv4", 00:45:45.643 "trsvcid": "4420", 00:45:45.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:45.643 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:45.643 "hdgst": false, 00:45:45.643 "ddgst": false 00:45:45.643 }, 00:45:45.643 "method": "bdev_nvme_attach_controller" 00:45:45.643 },{ 00:45:45.643 "params": { 00:45:45.643 "name": "Nvme1", 00:45:45.643 "trtype": "tcp", 00:45:45.643 "traddr": "10.0.0.2", 00:45:45.643 "adrfam": "ipv4", 00:45:45.643 "trsvcid": "4420", 00:45:45.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:45.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:45.643 "hdgst": false, 00:45:45.643 "ddgst": false 00:45:45.643 }, 00:45:45.643 "method": "bdev_nvme_attach_controller" 00:45:45.643 }' 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:45.643 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:45.644 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:45.644 01:11:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:45.644 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:45.644 ... 00:45:45.644 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:45.644 ... 
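The "filename0/filename1 ... rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8" banner above reflects the options set at dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, two null-backed subsystems). A rough hand-written equivalent of such a job file follows; it is not the literal gen_fio_conf output, and the Nvme0n1/Nvme1n1 filenames plus the thread/time_based lines are assumptions about how the plugin is normally driven:

  cat > dif_rand.fio <<'EOF'
  ; read block size 8k, write 16k, trim 128k - matches the banner above
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1
  EOF

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --spdk_json_conf=./bdev.json dif_rand.fio

Here ./bdev.json stands for a file carrying the two bdev_nvme_attach_controller parameter blocks printed just above (Nvme0 and Nvme1).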
00:45:45.644 fio-3.35 00:45:45.644 Starting 4 threads 00:45:50.920 00:45:50.920 filename0: (groupid=0, jobs=1): err= 0: pid=116631: Wed May 15 01:11:53 2024 00:45:50.920 read: IOPS=1896, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5001msec) 00:45:50.920 slat (usec): min=7, max=279, avg=25.11, stdev= 7.34 00:45:50.920 clat (usec): min=2316, max=7525, avg=4100.97, stdev=291.05 00:45:50.920 lat (usec): min=2324, max=7535, avg=4126.08, stdev=290.76 00:45:50.920 clat percentiles (usec): 00:45:50.920 | 1.00th=[ 3851], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 3982], 00:45:50.920 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4080], 00:45:50.920 | 70.00th=[ 4113], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4359], 00:45:50.920 | 99.00th=[ 5473], 99.50th=[ 5997], 99.90th=[ 7046], 99.95th=[ 7046], 00:45:50.920 | 99.99th=[ 7504] 00:45:50.920 bw ( KiB/s): min=15104, max=15360, per=25.09%, avg=15251.56, stdev=112.51, samples=9 00:45:50.920 iops : min= 1888, max= 1920, avg=1906.44, stdev=14.06, samples=9 00:45:50.920 lat (msec) : 4=28.40%, 10=71.60% 00:45:50.920 cpu : usr=94.92%, sys=3.54%, ctx=18, majf=0, minf=9 00:45:50.920 IO depths : 1=11.5%, 2=23.9%, 4=51.1%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:50.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.920 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.920 issued rwts: total=9483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:50.920 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:50.920 filename0: (groupid=0, jobs=1): err= 0: pid=116632: Wed May 15 01:11:53 2024 00:45:50.920 read: IOPS=1907, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5004msec) 00:45:50.920 slat (nsec): min=7413, max=56987, avg=9465.02, stdev=4013.89 00:45:50.920 clat (usec): min=1236, max=6622, avg=4143.48, stdev=286.79 00:45:50.920 lat (usec): min=1252, max=6636, avg=4152.95, stdev=286.76 00:45:50.920 clat percentiles (usec): 00:45:50.920 | 1.00th=[ 3654], 5.00th=[ 4047], 10.00th=[ 4047], 20.00th=[ 4080], 00:45:50.920 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4113], 00:45:50.920 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4293], 95.00th=[ 4359], 00:45:50.920 | 99.00th=[ 5473], 99.50th=[ 5538], 99.90th=[ 5604], 99.95th=[ 5735], 00:45:50.920 | 99.99th=[ 6652] 00:45:50.920 bw ( KiB/s): min=15232, max=15664, per=25.23%, avg=15336.89, stdev=152.44, samples=9 00:45:50.920 iops : min= 1904, max= 1958, avg=1917.11, stdev=19.06, samples=9 00:45:50.920 lat (msec) : 2=0.43%, 4=2.01%, 10=97.56% 00:45:50.920 cpu : usr=94.10%, sys=4.60%, ctx=4, majf=0, minf=0 00:45:50.920 IO depths : 1=11.2%, 2=25.0%, 4=50.0%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:50.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.920 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.920 issued rwts: total=9547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:50.920 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:50.920 filename1: (groupid=0, jobs=1): err= 0: pid=116633: Wed May 15 01:11:53 2024 00:45:50.920 read: IOPS=1899, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5001msec) 00:45:50.920 slat (nsec): min=6214, max=67766, avg=16543.50, stdev=11814.52 00:45:50.920 clat (usec): min=1845, max=8045, avg=4143.33, stdev=272.31 00:45:50.920 lat (usec): min=1853, max=8067, avg=4159.87, stdev=269.96 00:45:50.920 clat percentiles (usec): 00:45:50.920 | 1.00th=[ 3785], 5.00th=[ 3916], 10.00th=[ 3982], 20.00th=[ 4047], 00:45:50.920 | 30.00th=[ 4080], 40.00th=[ 4080], 
50.00th=[ 4113], 60.00th=[ 4146], 00:45:50.920 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4359], 00:45:50.920 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 6128], 99.95th=[ 7046], 00:45:50.920 | 99.99th=[ 8029] 00:45:50.920 bw ( KiB/s): min=15104, max=15360, per=25.14%, avg=15280.00, stdev=89.08, samples=9 00:45:50.920 iops : min= 1888, max= 1920, avg=1910.00, stdev=11.14, samples=9 00:45:50.920 lat (msec) : 2=0.03%, 4=12.39%, 10=87.58% 00:45:50.920 cpu : usr=94.80%, sys=3.88%, ctx=4, majf=0, minf=9 00:45:50.920 IO depths : 1=10.7%, 2=22.4%, 4=52.6%, 8=14.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:50.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.920 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.920 issued rwts: total=9499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:50.920 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:50.920 filename1: (groupid=0, jobs=1): err= 0: pid=116634: Wed May 15 01:11:53 2024 00:45:50.920 read: IOPS=1898, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5001msec) 00:45:50.920 slat (nsec): min=6211, max=65937, avg=25282.06, stdev=8517.00 00:45:50.920 clat (usec): min=1934, max=9808, avg=4090.15, stdev=302.94 00:45:50.920 lat (usec): min=1954, max=9816, avg=4115.43, stdev=303.02 00:45:50.920 clat percentiles (usec): 00:45:50.920 | 1.00th=[ 3818], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 3982], 00:45:50.920 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:45:50.920 | 70.00th=[ 4113], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4359], 00:45:50.921 | 99.00th=[ 5473], 99.50th=[ 5997], 99.90th=[ 6521], 99.95th=[ 7046], 00:45:50.921 | 99.99th=[ 9765] 00:45:50.921 bw ( KiB/s): min=15104, max=15360, per=25.13%, avg=15274.67, stdev=90.51, samples=9 00:45:50.921 iops : min= 1888, max= 1920, avg=1909.33, stdev=11.31, samples=9 00:45:50.921 lat (msec) : 2=0.01%, 4=31.13%, 10=68.86% 00:45:50.921 cpu : usr=94.64%, sys=4.02%, ctx=14, majf=0, minf=10 00:45:50.921 IO depths : 1=11.7%, 2=24.7%, 4=50.2%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:50.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.921 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.921 issued rwts: total=9496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:50.921 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:50.921 00:45:50.921 Run status group 0 (all jobs): 00:45:50.921 READ: bw=59.4MiB/s (62.2MB/s), 14.8MiB/s-14.9MiB/s (15.5MB/s-15.6MB/s), io=297MiB (312MB), run=5001-5004msec 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:50.921 ************************************ 00:45:50.921 END TEST fio_dif_rand_params 00:45:50.921 ************************************ 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:50.921 00:45:50.921 real 0m23.801s 00:45:50.921 user 2m6.626s 00:45:50.921 sys 0m4.433s 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:45:50.921 01:11:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:50.921 01:11:53 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:50.921 01:11:53 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:45:50.921 01:11:53 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:45:50.921 01:11:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:50.921 ************************************ 00:45:50.921 START TEST fio_dif_digest 00:45:50.921 ************************************ 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
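fio_dif_digest has just picked NULL_DIF=3, 128k blocks, numjobs=3, iodepth=3, a 10-second runtime, and header+data digests, and the trace that follows expands create_subsystems 0 into a handful of RPCs against the already-running nvmf target. A condensed manual equivalent using scripts/rpc.py, with the listener address and NQNs exactly as they appear in this run (it assumes the target app and its TCP transport set up earlier in the log are still up):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # 64 MB null bdev: 512-byte blocks plus 16 bytes of metadata carrying DIF type 3 protection info
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

  # Expose it through an NVMe-oF subsystem listening on NVMe/TCP 10.0.0.2:4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The header and data digests themselves are not part of these target-side RPCs; they are requested by the initiator through the hdgst/ddgst parameters in the JSON config printed further down.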
00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.921 bdev_null0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.921 [2024-05-15 01:11:53.541775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:50.921 { 00:45:50.921 "params": { 00:45:50.921 "name": "Nvme$subsystem", 00:45:50.921 "trtype": "$TEST_TRANSPORT", 00:45:50.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:50.921 "adrfam": "ipv4", 00:45:50.921 "trsvcid": "$NVMF_PORT", 00:45:50.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:50.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:50.921 "hdgst": ${hdgst:-false}, 00:45:50.921 "ddgst": ${ddgst:-false} 00:45:50.921 }, 00:45:50.921 "method": "bdev_nvme_attach_controller" 00:45:50.921 } 00:45:50.921 EOF 00:45:50.921 )") 00:45:50.921 01:11:53 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:45:50.921 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local sanitizers 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:50.922 "params": { 00:45:50.922 "name": "Nvme0", 00:45:50.922 "trtype": "tcp", 00:45:50.922 "traddr": "10.0.0.2", 00:45:50.922 "adrfam": "ipv4", 00:45:50.922 "trsvcid": "4420", 00:45:50.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:50.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:50.922 "hdgst": true, 00:45:50.922 "ddgst": true 00:45:50.922 }, 00:45:50.922 "method": "bdev_nvme_attach_controller" 00:45:50.922 }' 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:50.922 01:11:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:50.922 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:50.922 ... 
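The printf above is the per-controller parameter block handed to the fio plugin for this test, now with "hdgst": true and "ddgst": true so the NVMe/TCP connection to cnode0 negotiates header and data digests. Written out as a standalone file for --spdk_json_conf it would look roughly like the following; the params object is copied from the trace, while the outer subsystems/config wrapper is the usual SPDK JSON-config shape and is assumed rather than shown in the log:

  cat > bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true,
              "ddgst": true
            }
          }
        ]
      }
    ]
  }
  EOF

fio then only needs --ioengine=spdk_bdev, --spdk_json_conf=./bdev.json and a job file whose filename= points at Nvme0n1, the namespace bdev this controller exposes.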
00:45:50.922 fio-3.35 00:45:50.922 Starting 3 threads 00:46:03.187 00:46:03.187 filename0: (groupid=0, jobs=1): err= 0: pid=116736: Wed May 15 01:12:04 2024 00:46:03.187 read: IOPS=178, BW=22.3MiB/s (23.3MB/s)(223MiB/10005msec) 00:46:03.187 slat (nsec): min=8010, max=66982, avg=21615.78, stdev=7017.36 00:46:03.187 clat (usec): min=9077, max=26676, avg=16825.23, stdev=2482.79 00:46:03.187 lat (usec): min=9092, max=26691, avg=16846.84, stdev=2484.30 00:46:03.187 clat percentiles (usec): 00:46:03.187 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[13435], 20.00th=[16188], 00:46:03.187 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:46:03.187 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18744], 95.00th=[19530], 00:46:03.187 | 99.00th=[23987], 99.50th=[25560], 99.90th=[26608], 99.95th=[26608], 00:46:03.187 | 99.99th=[26608] 00:46:03.187 bw ( KiB/s): min=19968, max=25344, per=29.18%, avg=22635.79, stdev=1473.86, samples=19 00:46:03.187 iops : min= 156, max= 198, avg=176.84, stdev=11.51, samples=19 00:46:03.187 lat (msec) : 10=1.52%, 20=94.55%, 50=3.93% 00:46:03.187 cpu : usr=94.64%, sys=4.06%, ctx=16, majf=0, minf=9 00:46:03.187 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:03.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.187 issued rwts: total=1781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:03.187 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:03.187 filename0: (groupid=0, jobs=1): err= 0: pid=116737: Wed May 15 01:12:04 2024 00:46:03.187 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10005msec) 00:46:03.187 slat (nsec): min=7932, max=95721, avg=18147.70, stdev=7476.76 00:46:03.187 clat (usec): min=7413, max=21336, avg=14267.24, stdev=2136.78 00:46:03.187 lat (usec): min=7427, max=21359, avg=14285.39, stdev=2137.53 00:46:03.187 clat percentiles (usec): 00:46:03.187 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[11469], 20.00th=[13304], 00:46:03.187 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14484], 60.00th=[14877], 00:46:03.187 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16450], 95.00th=[17171], 00:46:03.187 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20579], 99.95th=[21103], 00:46:03.187 | 99.99th=[21365] 00:46:03.187 bw ( KiB/s): min=23808, max=30720, per=34.40%, avg=26677.89, stdev=1815.37, samples=19 00:46:03.187 iops : min= 186, max= 240, avg=208.42, stdev=14.18, samples=19 00:46:03.187 lat (msec) : 10=8.86%, 20=90.76%, 50=0.38% 00:46:03.187 cpu : usr=94.40%, sys=4.12%, ctx=20, majf=0, minf=9 00:46:03.187 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:03.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.187 issued rwts: total=2100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:03.187 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:03.187 filename0: (groupid=0, jobs=1): err= 0: pid=116738: Wed May 15 01:12:04 2024 00:46:03.187 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(273MiB/10008msec) 00:46:03.187 slat (usec): min=8, max=510, avg=19.37, stdev=12.49 00:46:03.187 clat (usec): min=9000, max=62243, avg=13727.53, stdev=6297.54 00:46:03.187 lat (usec): min=9012, max=62263, avg=13746.90, stdev=6297.55 00:46:03.187 clat percentiles (usec): 00:46:03.187 | 1.00th=[10421], 5.00th=[11207], 10.00th=[11469], 20.00th=[11863], 00:46:03.187 | 
30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:46:03.187 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13960], 95.00th=[16188], 00:46:03.187 | 99.00th=[53216], 99.50th=[54264], 99.90th=[60556], 99.95th=[61080], 00:46:03.187 | 99.99th=[62129] 00:46:03.187 bw ( KiB/s): min=23040, max=31232, per=36.34%, avg=28186.95, stdev=2624.46, samples=19 00:46:03.187 iops : min= 180, max= 244, avg=220.21, stdev=20.50, samples=19 00:46:03.187 lat (msec) : 10=0.14%, 20=97.30%, 50=0.23%, 100=2.34% 00:46:03.187 cpu : usr=93.03%, sys=5.33%, ctx=101, majf=0, minf=0 00:46:03.187 IO depths : 1=2.2%, 2=97.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:03.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:03.187 issued rwts: total=2183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:03.187 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:03.187 00:46:03.187 Run status group 0 (all jobs): 00:46:03.187 READ: bw=75.7MiB/s (79.4MB/s), 22.3MiB/s-27.3MiB/s (23.3MB/s-28.6MB/s), io=758MiB (795MB), run=10005-10008msec 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:03.187 00:46:03.187 real 0m11.049s 00:46:03.187 user 0m28.932s 00:46:03.187 sys 0m1.641s 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:46:03.187 ************************************ 00:46:03.187 END TEST fio_dif_digest 00:46:03.187 ************************************ 00:46:03.187 01:12:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:03.187 01:12:04 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:46:03.187 01:12:04 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:03.187 rmmod nvme_tcp 00:46:03.187 rmmod nvme_fabrics 00:46:03.187 rmmod nvme_keyring 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 115995 ']' 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 115995 00:46:03.187 01:12:04 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 115995 ']' 00:46:03.187 01:12:04 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 115995 00:46:03.187 01:12:04 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:46:03.187 01:12:04 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:46:03.187 01:12:04 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 115995 00:46:03.187 killing process with pid 115995 00:46:03.187 01:12:04 nvmf_dif -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:46:03.187 01:12:04 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:46:03.187 01:12:04 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 115995' 00:46:03.187 01:12:04 nvmf_dif -- common/autotest_common.sh@966 -- # kill 115995 00:46:03.187 [2024-05-15 01:12:04.722807] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:46:03.187 01:12:04 nvmf_dif -- common/autotest_common.sh@971 -- # wait 115995 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:46:03.187 01:12:04 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:03.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:03.187 Waiting for block devices as requested 00:46:03.187 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:46:03.187 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:03.187 01:12:05 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:46:03.187 01:12:05 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:46:03.187 01:12:05 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:03.187 01:12:05 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:46:03.187 01:12:05 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:03.187 01:12:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:03.187 01:12:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:03.187 01:12:05 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:46:03.187 ************************************ 00:46:03.187 END TEST nvmf_dif 00:46:03.187 ************************************ 00:46:03.187 00:46:03.187 real 1m0.305s 00:46:03.187 user 3m52.089s 00:46:03.187 sys 0m14.310s 00:46:03.187 01:12:05 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:46:03.187 01:12:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:03.187 01:12:05 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:03.187 01:12:05 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:46:03.187 01:12:05 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:46:03.187 01:12:05 -- common/autotest_common.sh@10 -- # set +x 00:46:03.187 ************************************ 00:46:03.187 START TEST nvmf_abort_qd_sizes 00:46:03.187 
************************************ 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:03.187 * Looking for test storage... 00:46:03.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:03.187 01:12:05 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:46:03.188 01:12:05 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:46:03.188 Cannot find device "nvmf_tgt_br" 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:46:03.188 Cannot find device "nvmf_tgt_br2" 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:46:03.188 Cannot find device "nvmf_tgt_br" 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:46:03.188 Cannot find device "nvmf_tgt_br2" 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:03.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:03.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:03.188 01:12:05 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:46:03.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:03.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:46:03.188 00:46:03.188 --- 10.0.0.2 ping statistics --- 00:46:03.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:03.188 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:46:03.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:03.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:46:03.188 00:46:03.188 --- 10.0.0.3 ping statistics --- 00:46:03.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:03.188 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:03.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:03.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:46:03.188 00:46:03.188 --- 10.0.0.1 ping statistics --- 00:46:03.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:03.188 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:46:03.188 01:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:03.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:03.448 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:03.707 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:46:03.707 01:12:06 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:03.707 01:12:06 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:46:03.707 01:12:06 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:46:03.707 01:12:06 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:03.707 01:12:06 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:46:03.707 01:12:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:46:03.707 01:12:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=117325 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 117325 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 117325 ']' 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:03.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:46:03.708 01:12:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:03.708 [2024-05-15 01:12:06.888255] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
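For reference, the nvmf_veth_init sequence traced above reduces to the following standalone sketch. The interface names, addresses, firewall rules and ping checks are taken verbatim from the trace; collapsing them into one script is only an illustration of the topology, not the test suite's own helper.

#!/usr/bin/env bash
# Minimal sketch of the veth/namespace topology built by the trace above.
set -e

# Target-side network namespace and the three veth pairs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign the 10.0.0.0/24 addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic reach the initiator interface and hairpin across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, as in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf), which is why the listener it later opens on 10.0.0.2:4420 is only reachable through this bridge.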
00:46:03.708 [2024-05-15 01:12:06.888367] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:03.965 [2024-05-15 01:12:07.026470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:03.965 [2024-05-15 01:12:07.125625] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:03.965 [2024-05-15 01:12:07.125691] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:03.965 [2024-05-15 01:12:07.125702] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:03.965 [2024-05-15 01:12:07.125711] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:03.965 [2024-05-15 01:12:07.125719] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:03.965 [2024-05-15 01:12:07.125805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:03.965 [2024-05-15 01:12:07.125896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:46:03.965 [2024-05-15 01:12:07.126815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:46:03.965 [2024-05-15 01:12:07.126828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:46:04.899 01:12:07 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:46:04.899 01:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:04.899 ************************************ 00:46:04.899 START TEST spdk_target_abort 00:46:04.899 ************************************ 00:46:04.899 01:12:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:46:04.899 01:12:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:04.899 01:12:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:46:04.899 01:12:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:04.899 01:12:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:04.899 spdk_targetn1 00:46:04.899 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:04.899 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:04.899 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:04.899 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:04.899 [2024-05-15 01:12:08.072059] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:04.899 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:04.899 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:04.899 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:04.899 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:04.899 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:04.900 [2024-05-15 01:12:08.104028] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:46:04.900 [2024-05-15 01:12:08.104352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:04.900 01:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:08.208 Initializing NVMe Controllers 00:46:08.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:08.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:08.208 Initialization complete. Launching workers. 
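Spelled out, the spdk_target_abort setup traced above is a short series of RPC calls followed by a queue-depth sweep; the three runs whose NS:/CTRLR: statistics follow are produced by the loop at the end of this sketch. The RPC method names and arguments are taken from the trace (rpc_cmd there is the test suite's wrapper around scripts/rpc.py); chaining them into one script is illustrative.

# Provision the SPDK target over RPC, as in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# rabort then sweeps three abort queue depths against that listener.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

Each run reports the number of I/Os completed, the number of abort commands submitted, and how many of those aborts succeeded, which is what the NS:/CTRLR: summary lines below record.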
00:46:08.208 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10127, failed: 0 00:46:08.208 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1099, failed to submit 9028 00:46:08.208 success 774, unsuccess 325, failed 0 00:46:08.208 01:12:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:08.208 01:12:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:11.545 Initializing NVMe Controllers 00:46:11.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:11.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:11.545 Initialization complete. Launching workers. 00:46:11.545 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5928, failed: 0 00:46:11.545 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 4700 00:46:11.545 success 265, unsuccess 963, failed 0 00:46:11.545 01:12:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:11.545 01:12:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:14.829 Initializing NVMe Controllers 00:46:14.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:14.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:14.829 Initialization complete. Launching workers. 
00:46:14.830 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29385, failed: 0 00:46:14.830 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2592, failed to submit 26793 00:46:14.830 success 398, unsuccess 2194, failed 0 00:46:14.830 01:12:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:14.830 01:12:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:14.830 01:12:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:14.830 01:12:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:14.830 01:12:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:14.830 01:12:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:14.830 01:12:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:15.151 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:15.151 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 117325 00:46:15.151 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 117325 ']' 00:46:15.151 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 117325 00:46:15.151 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:46:15.151 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:46:15.151 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 117325 00:46:15.410 killing process with pid 117325 00:46:15.411 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:46:15.411 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:46:15.411 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 117325' 00:46:15.411 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 117325 00:46:15.411 [2024-05-15 01:12:18.451149] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:46:15.411 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 117325 00:46:15.411 00:46:15.411 real 0m10.697s 00:46:15.411 user 0m43.731s 00:46:15.411 sys 0m1.796s 00:46:15.411 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:46:15.411 01:12:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:15.411 ************************************ 00:46:15.411 END TEST spdk_target_abort 00:46:15.411 ************************************ 00:46:15.669 01:12:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:15.669 01:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:46:15.669 01:12:18 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:46:15.669 01:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:15.669 ************************************ 00:46:15.669 START TEST kernel_target_abort 00:46:15.669 ************************************ 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:15.669 01:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:15.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:15.927 Waiting for block devices as requested 00:46:15.927 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:46:16.186 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:46:16.186 No valid GPT data, bailing 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:46:16.186 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:46:16.445 No valid GPT data, bailing 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:46:16.445 No valid GPT data, bailing 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:46:16.445 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:46:16.445 No valid GPT data, bailing 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 --hostid=805558a3-5ce0-4866-80b9-32ca60bbceb5 -a 10.0.0.1 -t tcp -s 4420 00:46:16.446 00:46:16.446 Discovery Log Number of Records 2, Generation counter 2 00:46:16.446 =====Discovery Log Entry 0====== 00:46:16.446 trtype: tcp 00:46:16.446 adrfam: ipv4 00:46:16.446 subtype: current discovery subsystem 00:46:16.446 treq: not specified, sq flow control disable supported 00:46:16.446 portid: 1 00:46:16.446 trsvcid: 4420 00:46:16.446 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:16.446 traddr: 10.0.0.1 00:46:16.446 eflags: none 00:46:16.446 sectype: none 00:46:16.446 =====Discovery Log Entry 1====== 00:46:16.446 trtype: tcp 00:46:16.446 adrfam: ipv4 00:46:16.446 subtype: nvme subsystem 00:46:16.446 treq: not specified, sq flow control disable supported 00:46:16.446 portid: 1 00:46:16.446 trsvcid: 4420 00:46:16.446 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:16.446 traddr: 10.0.0.1 00:46:16.446 eflags: none 00:46:16.446 sectype: none 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:16.446 01:12:19 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:16.446 01:12:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:19.728 Initializing NVMe Controllers 00:46:19.728 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:19.728 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:19.728 Initialization complete. Launching workers. 00:46:19.728 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35713, failed: 0 00:46:19.728 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35713, failed to submit 0 00:46:19.728 success 0, unsuccess 35713, failed 0 00:46:19.728 01:12:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:19.728 01:12:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:23.010 Initializing NVMe Controllers 00:46:23.010 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:23.010 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:23.010 Initialization complete. Launching workers. 
00:46:23.010 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70893, failed: 0 00:46:23.010 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30078, failed to submit 40815 00:46:23.010 success 0, unsuccess 30078, failed 0 00:46:23.010 01:12:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:23.010 01:12:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:26.292 Initializing NVMe Controllers 00:46:26.292 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:26.292 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:26.292 Initialization complete. Launching workers. 00:46:26.292 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81562, failed: 0 00:46:26.292 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20354, failed to submit 61208 00:46:26.292 success 0, unsuccess 20354, failed 0 00:46:26.292 01:12:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:26.292 01:12:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:26.292 01:12:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:46:26.292 01:12:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:26.293 01:12:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:26.293 01:12:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:26.293 01:12:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:26.293 01:12:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:46:26.293 01:12:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:46:26.293 01:12:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:26.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:27.792 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:27.792 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:46:27.792 00:46:27.792 real 0m12.264s 00:46:27.792 user 0m6.348s 00:46:27.792 sys 0m3.210s 00:46:27.792 01:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:46:27.792 ************************************ 00:46:27.792 END TEST kernel_target_abort 00:46:27.792 01:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:27.792 ************************************ 00:46:27.792 01:12:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:27.792 01:12:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:27.792 
01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:27.792 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:28.050 rmmod nvme_tcp 00:46:28.050 rmmod nvme_fabrics 00:46:28.050 rmmod nvme_keyring 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:46:28.050 Process with pid 117325 is not found 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 117325 ']' 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 117325 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 117325 ']' 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 117325 00:46:28.050 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (117325) - No such process 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 117325 is not found' 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:46:28.050 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:28.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:28.308 Waiting for block devices as requested 00:46:28.308 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:46:28.565 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:28.565 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:46:28.565 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:46:28.565 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:28.565 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:46:28.565 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:28.565 01:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:28.565 01:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:28.566 01:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:46:28.566 ************************************ 00:46:28.566 END TEST nvmf_abort_qd_sizes 00:46:28.566 ************************************ 00:46:28.566 00:46:28.566 real 0m26.149s 00:46:28.566 user 0m51.224s 00:46:28.566 sys 0m6.394s 00:46:28.566 01:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # xtrace_disable 00:46:28.566 01:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:28.566 01:12:31 -- spdk/autotest.sh@291 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:46:28.566 01:12:31 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:46:28.566 01:12:31 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:46:28.566 01:12:31 -- common/autotest_common.sh@10 -- # set +x 
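For comparison with the RPC-driven SPDK target, the kernel_target_abort case above assembles its target directly in nvmet configfs. The mkdir/echo/ln -s commands and their values are visible in the trace, but xtrace hides the redirection targets, so the attribute paths in the sketch below are the standard nvmet configfs names and should be read as assumptions (including writing the model string to attr_model); /dev/nvme1n1 is the block device selected by the GPT scan earlier in the test.

# Sketch of configure_kernel_target as traced above; redirection targets assumed.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"        # attribute name assumed
echo 1            > "$subsys/attr_allow_any_host"                    # attribute name assumed
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"               # device found by the GPT scan
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# clean_kernel_target undoes it in the order shown in the trace.
echo 0 > "$subsys/namespaces/1/enable"                                # target file assumed
rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir  "$subsys/namespaces/1"
rmdir  "$nvmet/ports/1"
rmdir  "$subsys"
modprobe -r nvmet_tcp nvmet

The nvme discover output logged earlier (two records: the discovery subsystem and nqn.2016-06.io.spdk:testnqn, both on 10.0.0.1:4420) is the check that this configfs layout actually exposed the subsystem before the abort runs start.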
00:46:28.566 ************************************ 00:46:28.566 START TEST keyring_file 00:46:28.566 ************************************ 00:46:28.566 01:12:31 keyring_file -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:46:28.566 * Looking for test storage... 00:46:28.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:46:28.566 01:12:31 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:46:28.566 01:12:31 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:28.566 01:12:31 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:28.566 01:12:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:28.566 01:12:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:805558a3-5ce0-4866-80b9-32ca60bbceb5 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=805558a3-5ce0-4866-80b9-32ca60bbceb5 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:28.824 01:12:31 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:28.824 01:12:31 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:28.824 01:12:31 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:28.824 01:12:31 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.824 01:12:31 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.824 01:12:31 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.824 01:12:31 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:28.824 01:12:31 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@47 -- # : 0 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:28.824 01:12:31 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:28.824 01:12:31 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:28.824 01:12:31 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:28.824 01:12:31 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:28.824 01:12:31 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:28.824 01:12:31 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:28.824 01:12:31 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:28.824 01:12:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:28.824 01:12:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:28.824 01:12:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:28.824 01:12:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:28.824 01:12:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:28.824 01:12:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gFp2oeWZsq 00:46:28.824 01:12:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:46:28.824 01:12:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:46:28.824 01:12:31 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gFp2oeWZsq 00:46:28.824 01:12:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gFp2oeWZsq 00:46:28.824 01:12:31 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.gFp2oeWZsq 00:46:28.825 01:12:31 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:28.825 01:12:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:28.825 01:12:31 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:28.825 01:12:31 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:28.825 01:12:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:28.825 01:12:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:28.825 01:12:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.10lmWYVNww 00:46:28.825 01:12:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:28.825 01:12:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:28.825 01:12:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:46:28.825 01:12:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:46:28.825 01:12:31 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:46:28.825 01:12:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:46:28.825 01:12:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:46:28.825 01:12:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.10lmWYVNww 00:46:28.825 01:12:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.10lmWYVNww 00:46:28.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:28.825 01:12:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.10lmWYVNww 00:46:28.825 01:12:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=118195 00:46:28.825 01:12:31 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:28.825 01:12:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 118195 00:46:28.825 01:12:31 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 118195 ']' 00:46:28.825 01:12:31 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:28.825 01:12:31 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:46:28.825 01:12:31 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:28.825 01:12:31 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:46:28.825 01:12:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:28.825 [2024-05-15 01:12:32.059493] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
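The prep_key calls traced above each write one NVMe/TCP PSK to a temp file in the NVMeTLSkey-1 interchange format and lock its permissions down to 0600 before the path is handed to the keyring. A condensed, hypothetical sketch of that flow, reusing the helpers this test sources (format_interchange_psk comes from test/nvmf/common.sh; keys and paths are the ones from this run):

    #!/usr/bin/env bash
    # Sketch of prep_key as traced above; assumes the SPDK repo layout of this run.
    rootdir=/home/vagrant/spdk_repo/spdk
    source "$rootdir/test/nvmf/common.sh"          # provides format_interchange_psk / format_key
    key0=00112233445566778899aabbccddeeff
    key1=112233445566778899aabbccddeeff00
    key0path=$(mktemp)                              # /tmp/tmp.gFp2oeWZsq in this run
    key1path=$(mktemp)                              # /tmp/tmp.10lmWYVNww in this run
    format_interchange_psk "$key0" 0 > "$key0path"  # digest 0 selects the no-hash form of the NVMeTLSkey-1 envelope
    format_interchange_psk "$key1" 0 > "$key1path"
    chmod 0600 "$key0path" "$key1path"              # looser modes (e.g. 0660) are rejected by keyring_file_add_key later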
00:46:28.825 [2024-05-15 01:12:32.059930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118195 ] 00:46:29.086 [2024-05-15 01:12:32.204186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:29.086 [2024-05-15 01:12:32.297947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:30.025 01:12:33 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:46:30.025 01:12:33 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:46:30.025 01:12:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:30.025 01:12:33 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:30.025 01:12:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:30.026 [2024-05-15 01:12:33.100887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:30.026 null0 00:46:30.026 [2024-05-15 01:12:33.132814] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:46:30.026 [2024-05-15 01:12:33.133036] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:30.026 [2024-05-15 01:12:33.133238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:30.026 [2024-05-15 01:12:33.140843] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:30.026 01:12:33 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:30.026 [2024-05-15 01:12:33.152843] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:30.026 2024/05/15 01:12:33 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:46:30.026 request: 00:46:30.026 { 00:46:30.026 "method": "nvmf_subsystem_add_listener", 00:46:30.026 "params": { 00:46:30.026 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:30.026 "secure_channel": false, 
00:46:30.026 "listen_address": { 00:46:30.026 "trtype": "tcp", 00:46:30.026 "traddr": "127.0.0.1", 00:46:30.026 "trsvcid": "4420" 00:46:30.026 } 00:46:30.026 } 00:46:30.026 } 00:46:30.026 Got JSON-RPC error response 00:46:30.026 GoRPCClient: error on JSON-RPC call 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:30.026 01:12:33 keyring_file -- keyring/file.sh@46 -- # bperfpid=118230 00:46:30.026 01:12:33 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:30.026 01:12:33 keyring_file -- keyring/file.sh@48 -- # waitforlisten 118230 /var/tmp/bperf.sock 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 118230 ']' 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:30.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:46:30.026 01:12:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:30.026 [2024-05-15 01:12:33.223823] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:46:30.026 [2024-05-15 01:12:33.224091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118230 ] 00:46:30.285 [2024-05-15 01:12:33.358832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:30.285 [2024-05-15 01:12:33.451668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:31.217 01:12:34 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:46:31.217 01:12:34 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:46:31.217 01:12:34 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gFp2oeWZsq 00:46:31.217 01:12:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gFp2oeWZsq 00:46:31.474 01:12:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.10lmWYVNww 00:46:31.474 01:12:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.10lmWYVNww 00:46:31.732 01:12:34 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:46:31.732 01:12:34 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:46:31.732 01:12:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:31.732 01:12:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:31.732 01:12:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:31.990 01:12:35 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.gFp2oeWZsq == \/\t\m\p\/\t\m\p\.\g\F\p\2\o\e\W\Z\s\q ]] 00:46:31.990 01:12:35 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:46:31.990 01:12:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:31.990 01:12:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:31.990 01:12:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:31.990 01:12:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:32.249 01:12:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.10lmWYVNww == \/\t\m\p\/\t\m\p\.\1\0\l\m\W\Y\V\N\w\w ]] 00:46:32.249 01:12:35 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:46:32.249 01:12:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:32.249 01:12:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:32.249 01:12:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:32.249 01:12:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:32.249 01:12:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:32.507 01:12:35 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:46:32.507 01:12:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:46:32.507 01:12:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:32.507 01:12:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:32.507 01:12:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:32.507 01:12:35 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:32.507 01:12:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:32.764 01:12:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:32.764 01:12:35 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:32.764 01:12:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:33.021 [2024-05-15 01:12:36.105492] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:33.021 nvme0n1 00:46:33.021 01:12:36 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:46:33.021 01:12:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:33.021 01:12:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:33.021 01:12:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:33.021 01:12:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:33.021 01:12:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:33.279 01:12:36 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:46:33.279 01:12:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:46:33.279 01:12:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:33.279 01:12:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:33.279 01:12:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:33.279 01:12:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:33.279 01:12:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:33.537 01:12:36 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:46:33.537 01:12:36 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:33.794 Running I/O for 1 seconds... 
00:46:34.730 00:46:34.730 Latency(us) 00:46:34.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:34.730 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:46:34.730 nvme0n1 : 1.01 11213.09 43.80 0.00 0.00 11379.58 5600.35 22282.24 00:46:34.730 =================================================================================================================== 00:46:34.730 Total : 11213.09 43.80 0.00 0.00 11379.58 5600.35 22282.24 00:46:34.730 0 00:46:34.730 01:12:37 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:34.730 01:12:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:34.988 01:12:38 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:46:34.988 01:12:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:34.988 01:12:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:34.988 01:12:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:34.988 01:12:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:34.988 01:12:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:35.246 01:12:38 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:46:35.246 01:12:38 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:46:35.246 01:12:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:35.246 01:12:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:35.246 01:12:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:35.246 01:12:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:35.246 01:12:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:35.505 01:12:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:46:35.505 01:12:38 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:35.505 01:12:38 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:46:35.505 01:12:38 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:35.505 01:12:38 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:46:35.505 01:12:38 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:35.505 01:12:38 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:46:35.505 01:12:38 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:35.505 01:12:38 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:35.505 01:12:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:35.764 [2024-05-15 01:12:38.938115] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:35.764 [2024-05-15 01:12:38.938253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7ff10 (107): Transport endpoint is not connected 00:46:35.764 [2024-05-15 01:12:38.939244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7ff10 (9): Bad file descriptor 00:46:35.764 [2024-05-15 01:12:38.940240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:35.764 [2024-05-15 01:12:38.940259] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:35.764 [2024-05-15 01:12:38.940270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:46:35.764 2024/05/15 01:12:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:46:35.764 request: 00:46:35.764 { 00:46:35.764 "method": "bdev_nvme_attach_controller", 00:46:35.764 "params": { 00:46:35.764 "name": "nvme0", 00:46:35.764 "trtype": "tcp", 00:46:35.764 "traddr": "127.0.0.1", 00:46:35.764 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:35.764 "adrfam": "ipv4", 00:46:35.764 "trsvcid": "4420", 00:46:35.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:35.764 "psk": "key1" 00:46:35.764 } 00:46:35.764 } 00:46:35.764 Got JSON-RPC error response 00:46:35.764 GoRPCClient: error on JSON-RPC call 00:46:35.764 01:12:38 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:46:35.764 01:12:38 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:35.764 01:12:38 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:35.764 01:12:38 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:35.764 01:12:38 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:46:35.764 01:12:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:35.764 01:12:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:35.764 01:12:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:35.764 01:12:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:35.764 01:12:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:36.025 01:12:39 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:46:36.025 01:12:39 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:46:36.025 01:12:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:36.025 01:12:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:36.025 01:12:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:36.025 01:12:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:36.025 01:12:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:36.314 01:12:39 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:36.314 01:12:39 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:46:36.314 01:12:39 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:36.573 01:12:39 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:46:36.573 01:12:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:36.831 01:12:40 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:46:36.831 01:12:40 keyring_file -- keyring/file.sh@77 -- # jq length 00:46:36.831 01:12:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:37.399 01:12:40 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:46:37.399 01:12:40 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.gFp2oeWZsq 00:46:37.399 01:12:40 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.gFp2oeWZsq 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.gFp2oeWZsq 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gFp2oeWZsq 00:46:37.399 01:12:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gFp2oeWZsq 00:46:37.399 [2024-05-15 01:12:40.621680] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gFp2oeWZsq': 0100660 00:46:37.399 [2024-05-15 01:12:40.621722] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:37.399 2024/05/15 01:12:40 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.gFp2oeWZsq], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:46:37.399 request: 00:46:37.399 { 00:46:37.399 "method": "keyring_file_add_key", 00:46:37.399 "params": { 00:46:37.399 "name": "key0", 00:46:37.399 "path": "/tmp/tmp.gFp2oeWZsq" 00:46:37.399 } 00:46:37.399 } 00:46:37.399 Got JSON-RPC error response 00:46:37.399 GoRPCClient: error on JSON-RPC call 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:37.399 01:12:40 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:37.399 01:12:40 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.gFp2oeWZsq 00:46:37.399 01:12:40 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gFp2oeWZsq 00:46:37.399 01:12:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gFp2oeWZsq 00:46:37.965 01:12:40 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.gFp2oeWZsq 00:46:37.965 01:12:40 
keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:46:37.965 01:12:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:37.965 01:12:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:37.965 01:12:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:37.965 01:12:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:37.965 01:12:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:38.223 01:12:41 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:46:38.223 01:12:41 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:38.223 01:12:41 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:46:38.223 01:12:41 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:38.223 01:12:41 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:46:38.223 01:12:41 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:38.223 01:12:41 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:46:38.223 01:12:41 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:38.223 01:12:41 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:38.223 01:12:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:38.481 [2024-05-15 01:12:41.565862] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.gFp2oeWZsq': No such file or directory 00:46:38.481 [2024-05-15 01:12:41.565915] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:38.481 [2024-05-15 01:12:41.565954] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:38.481 [2024-05-15 01:12:41.565963] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:38.481 [2024-05-15 01:12:41.565972] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:38.481 2024/05/15 01:12:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:46:38.481 request: 00:46:38.481 { 00:46:38.481 "method": "bdev_nvme_attach_controller", 00:46:38.481 "params": { 00:46:38.481 "name": "nvme0", 00:46:38.481 "trtype": "tcp", 00:46:38.481 "traddr": "127.0.0.1", 00:46:38.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:38.481 "adrfam": "ipv4", 00:46:38.481 "trsvcid": "4420", 00:46:38.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:38.481 "psk": "key0" 00:46:38.481 } 00:46:38.481 } 
00:46:38.481 Got JSON-RPC error response 00:46:38.481 GoRPCClient: error on JSON-RPC call 00:46:38.481 01:12:41 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:46:38.481 01:12:41 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:38.481 01:12:41 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:38.481 01:12:41 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:38.481 01:12:41 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:46:38.481 01:12:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:38.739 01:12:41 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:38.739 01:12:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:38.739 01:12:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:38.739 01:12:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:38.739 01:12:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:38.739 01:12:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:38.739 01:12:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8ZwLjG7UUj 00:46:38.739 01:12:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:38.739 01:12:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:38.739 01:12:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:46:38.739 01:12:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:46:38.739 01:12:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:46:38.739 01:12:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:46:38.739 01:12:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:46:38.739 01:12:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8ZwLjG7UUj 00:46:38.739 01:12:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8ZwLjG7UUj 00:46:38.739 01:12:41 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.8ZwLjG7UUj 00:46:38.739 01:12:41 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8ZwLjG7UUj 00:46:38.739 01:12:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8ZwLjG7UUj 00:46:38.997 01:12:42 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:38.997 01:12:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:39.256 nvme0n1 00:46:39.514 01:12:42 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:46:39.514 01:12:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:39.514 01:12:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:39.514 01:12:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:39.514 01:12:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:39.514 01:12:42 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:39.772 01:12:42 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:46:39.772 01:12:42 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:46:39.772 01:12:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:40.031 01:12:43 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:46:40.031 01:12:43 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:46:40.031 01:12:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:40.031 01:12:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:40.031 01:12:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:40.289 01:12:43 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:46:40.289 01:12:43 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:46:40.289 01:12:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:40.289 01:12:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:40.289 01:12:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:40.289 01:12:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:40.289 01:12:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:40.289 01:12:43 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:46:40.289 01:12:43 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:40.289 01:12:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:40.855 01:12:43 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:46:40.855 01:12:43 keyring_file -- keyring/file.sh@104 -- # jq length 00:46:40.855 01:12:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:40.855 01:12:44 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:46:40.855 01:12:44 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8ZwLjG7UUj 00:46:40.855 01:12:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8ZwLjG7UUj 00:46:41.114 01:12:44 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.10lmWYVNww 00:46:41.114 01:12:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.10lmWYVNww 00:46:41.372 01:12:44 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:41.372 01:12:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:41.630 nvme0n1 00:46:41.888 01:12:44 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:46:41.888 01:12:44 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:46:42.146 01:12:45 keyring_file -- keyring/file.sh@112 -- # config='{ 00:46:42.146 "subsystems": [ 00:46:42.146 { 00:46:42.146 "subsystem": "keyring", 00:46:42.146 "config": [ 00:46:42.146 { 00:46:42.146 "method": "keyring_file_add_key", 00:46:42.146 "params": { 00:46:42.146 "name": "key0", 00:46:42.146 "path": "/tmp/tmp.8ZwLjG7UUj" 00:46:42.146 } 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "method": "keyring_file_add_key", 00:46:42.146 "params": { 00:46:42.146 "name": "key1", 00:46:42.146 "path": "/tmp/tmp.10lmWYVNww" 00:46:42.146 } 00:46:42.146 } 00:46:42.146 ] 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "subsystem": "iobuf", 00:46:42.146 "config": [ 00:46:42.146 { 00:46:42.146 "method": "iobuf_set_options", 00:46:42.146 "params": { 00:46:42.146 "large_bufsize": 135168, 00:46:42.146 "large_pool_count": 1024, 00:46:42.146 "small_bufsize": 8192, 00:46:42.146 "small_pool_count": 8192 00:46:42.146 } 00:46:42.146 } 00:46:42.146 ] 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "subsystem": "sock", 00:46:42.146 "config": [ 00:46:42.146 { 00:46:42.146 "method": "sock_impl_set_options", 00:46:42.146 "params": { 00:46:42.146 "enable_ktls": false, 00:46:42.146 "enable_placement_id": 0, 00:46:42.146 "enable_quickack": false, 00:46:42.146 "enable_recv_pipe": true, 00:46:42.146 "enable_zerocopy_send_client": false, 00:46:42.146 "enable_zerocopy_send_server": true, 00:46:42.146 "impl_name": "posix", 00:46:42.146 "recv_buf_size": 2097152, 00:46:42.146 "send_buf_size": 2097152, 00:46:42.146 "tls_version": 0, 00:46:42.146 "zerocopy_threshold": 0 00:46:42.146 } 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "method": "sock_impl_set_options", 00:46:42.146 "params": { 00:46:42.146 "enable_ktls": false, 00:46:42.146 "enable_placement_id": 0, 00:46:42.146 "enable_quickack": false, 00:46:42.146 "enable_recv_pipe": true, 00:46:42.146 "enable_zerocopy_send_client": false, 00:46:42.146 "enable_zerocopy_send_server": true, 00:46:42.146 "impl_name": "ssl", 00:46:42.146 "recv_buf_size": 4096, 00:46:42.146 "send_buf_size": 4096, 00:46:42.146 "tls_version": 0, 00:46:42.146 "zerocopy_threshold": 0 00:46:42.146 } 00:46:42.146 } 00:46:42.146 ] 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "subsystem": "vmd", 00:46:42.146 "config": [] 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "subsystem": "accel", 00:46:42.146 "config": [ 00:46:42.146 { 00:46:42.146 "method": "accel_set_options", 00:46:42.146 "params": { 00:46:42.146 "buf_count": 2048, 00:46:42.146 "large_cache_size": 16, 00:46:42.146 "sequence_count": 2048, 00:46:42.146 "small_cache_size": 128, 00:46:42.146 "task_count": 2048 00:46:42.146 } 00:46:42.146 } 00:46:42.146 ] 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "subsystem": "bdev", 00:46:42.146 "config": [ 00:46:42.146 { 00:46:42.146 "method": "bdev_set_options", 00:46:42.146 "params": { 00:46:42.146 "bdev_auto_examine": true, 00:46:42.146 "bdev_io_cache_size": 256, 00:46:42.146 "bdev_io_pool_size": 65535, 00:46:42.146 "iobuf_large_cache_size": 16, 00:46:42.146 "iobuf_small_cache_size": 128 00:46:42.146 } 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "method": "bdev_raid_set_options", 00:46:42.146 "params": { 00:46:42.146 "process_window_size_kb": 1024 00:46:42.146 } 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "method": "bdev_iscsi_set_options", 00:46:42.146 "params": { 00:46:42.146 "timeout_sec": 30 00:46:42.146 } 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "method": "bdev_nvme_set_options", 00:46:42.146 "params": { 00:46:42.146 
"action_on_timeout": "none", 00:46:42.146 "allow_accel_sequence": false, 00:46:42.146 "arbitration_burst": 0, 00:46:42.146 "bdev_retry_count": 3, 00:46:42.146 "ctrlr_loss_timeout_sec": 0, 00:46:42.146 "delay_cmd_submit": true, 00:46:42.146 "dhchap_dhgroups": [ 00:46:42.146 "null", 00:46:42.146 "ffdhe2048", 00:46:42.146 "ffdhe3072", 00:46:42.146 "ffdhe4096", 00:46:42.146 "ffdhe6144", 00:46:42.146 "ffdhe8192" 00:46:42.146 ], 00:46:42.146 "dhchap_digests": [ 00:46:42.146 "sha256", 00:46:42.146 "sha384", 00:46:42.146 "sha512" 00:46:42.146 ], 00:46:42.146 "disable_auto_failback": false, 00:46:42.146 "fast_io_fail_timeout_sec": 0, 00:46:42.146 "generate_uuids": false, 00:46:42.146 "high_priority_weight": 0, 00:46:42.146 "io_path_stat": false, 00:46:42.146 "io_queue_requests": 512, 00:46:42.146 "keep_alive_timeout_ms": 10000, 00:46:42.146 "low_priority_weight": 0, 00:46:42.146 "medium_priority_weight": 0, 00:46:42.146 "nvme_adminq_poll_period_us": 10000, 00:46:42.146 "nvme_error_stat": false, 00:46:42.146 "nvme_ioq_poll_period_us": 0, 00:46:42.146 "rdma_cm_event_timeout_ms": 0, 00:46:42.146 "rdma_max_cq_size": 0, 00:46:42.146 "rdma_srq_size": 0, 00:46:42.146 "reconnect_delay_sec": 0, 00:46:42.146 "timeout_admin_us": 0, 00:46:42.146 "timeout_us": 0, 00:46:42.146 "transport_ack_timeout": 0, 00:46:42.146 "transport_retry_count": 4, 00:46:42.146 "transport_tos": 0 00:46:42.146 } 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "method": "bdev_nvme_attach_controller", 00:46:42.146 "params": { 00:46:42.146 "adrfam": "IPv4", 00:46:42.146 "ctrlr_loss_timeout_sec": 0, 00:46:42.146 "ddgst": false, 00:46:42.146 "fast_io_fail_timeout_sec": 0, 00:46:42.146 "hdgst": false, 00:46:42.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:42.146 "name": "nvme0", 00:46:42.146 "prchk_guard": false, 00:46:42.146 "prchk_reftag": false, 00:46:42.146 "psk": "key0", 00:46:42.146 "reconnect_delay_sec": 0, 00:46:42.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:42.146 "traddr": "127.0.0.1", 00:46:42.146 "trsvcid": "4420", 00:46:42.146 "trtype": "TCP" 00:46:42.146 } 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "method": "bdev_nvme_set_hotplug", 00:46:42.146 "params": { 00:46:42.146 "enable": false, 00:46:42.146 "period_us": 100000 00:46:42.146 } 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "method": "bdev_wait_for_examine" 00:46:42.146 } 00:46:42.146 ] 00:46:42.146 }, 00:46:42.146 { 00:46:42.146 "subsystem": "nbd", 00:46:42.146 "config": [] 00:46:42.146 } 00:46:42.146 ] 00:46:42.146 }' 00:46:42.146 01:12:45 keyring_file -- keyring/file.sh@114 -- # killprocess 118230 00:46:42.146 01:12:45 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 118230 ']' 00:46:42.146 01:12:45 keyring_file -- common/autotest_common.sh@951 -- # kill -0 118230 00:46:42.146 01:12:45 keyring_file -- common/autotest_common.sh@952 -- # uname 00:46:42.146 01:12:45 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:46:42.146 01:12:45 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 118230 00:46:42.146 killing process with pid 118230 00:46:42.146 Received shutdown signal, test time was about 1.000000 seconds 00:46:42.146 00:46:42.146 Latency(us) 00:46:42.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:42.146 =================================================================================================================== 00:46:42.146 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:42.146 01:12:45 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 
00:46:42.147 01:12:45 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:46:42.147 01:12:45 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 118230' 00:46:42.147 01:12:45 keyring_file -- common/autotest_common.sh@966 -- # kill 118230 00:46:42.147 01:12:45 keyring_file -- common/autotest_common.sh@971 -- # wait 118230 00:46:42.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:42.404 01:12:45 keyring_file -- keyring/file.sh@117 -- # bperfpid=118702 00:46:42.404 01:12:45 keyring_file -- keyring/file.sh@119 -- # waitforlisten 118702 /var/tmp/bperf.sock 00:46:42.404 01:12:45 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 118702 ']' 00:46:42.404 01:12:45 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:46:42.404 "subsystems": [ 00:46:42.404 { 00:46:42.404 "subsystem": "keyring", 00:46:42.404 "config": [ 00:46:42.404 { 00:46:42.404 "method": "keyring_file_add_key", 00:46:42.404 "params": { 00:46:42.404 "name": "key0", 00:46:42.404 "path": "/tmp/tmp.8ZwLjG7UUj" 00:46:42.404 } 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "method": "keyring_file_add_key", 00:46:42.404 "params": { 00:46:42.404 "name": "key1", 00:46:42.404 "path": "/tmp/tmp.10lmWYVNww" 00:46:42.404 } 00:46:42.404 } 00:46:42.404 ] 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "subsystem": "iobuf", 00:46:42.404 "config": [ 00:46:42.404 { 00:46:42.404 "method": "iobuf_set_options", 00:46:42.404 "params": { 00:46:42.404 "large_bufsize": 135168, 00:46:42.404 "large_pool_count": 1024, 00:46:42.404 "small_bufsize": 8192, 00:46:42.404 "small_pool_count": 8192 00:46:42.404 } 00:46:42.404 } 00:46:42.404 ] 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "subsystem": "sock", 00:46:42.404 "config": [ 00:46:42.404 { 00:46:42.404 "method": "sock_impl_set_options", 00:46:42.404 "params": { 00:46:42.404 "enable_ktls": false, 00:46:42.404 "enable_placement_id": 0, 00:46:42.404 "enable_quickack": false, 00:46:42.404 "enable_recv_pipe": true, 00:46:42.404 "enable_zerocopy_send_client": false, 00:46:42.404 "enable_zerocopy_send_server": true, 00:46:42.404 "impl_name": "posix", 00:46:42.404 "recv_buf_size": 2097152, 00:46:42.404 "send_buf_size": 2097152, 00:46:42.404 "tls_version": 0, 00:46:42.404 "zerocopy_threshold": 0 00:46:42.404 } 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "method": "sock_impl_set_options", 00:46:42.404 "params": { 00:46:42.404 "enable_ktls": false, 00:46:42.404 "enable_placement_id": 0, 00:46:42.404 "enable_quickack": false, 00:46:42.404 "enable_recv_pipe": true, 00:46:42.404 "enable_zerocopy_send_client": false, 00:46:42.404 "enable_zerocopy_send_server": true, 00:46:42.404 "impl_name": "ssl", 00:46:42.404 "recv_buf_size": 4096, 00:46:42.404 "send_buf_size": 4096, 00:46:42.404 "tls_version": 0, 00:46:42.404 "zerocopy_threshold": 0 00:46:42.404 } 00:46:42.404 } 00:46:42.404 ] 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "subsystem": "vmd", 00:46:42.404 "config": [] 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "subsystem": "accel", 00:46:42.404 "config": [ 00:46:42.404 { 00:46:42.404 "method": "accel_set_options", 00:46:42.404 "params": { 00:46:42.404 "buf_count": 2048, 00:46:42.404 "large_cache_size": 16, 00:46:42.404 "sequence_count": 2048, 00:46:42.404 "small_cache_size": 128, 00:46:42.404 "task_count": 2048 00:46:42.404 } 00:46:42.404 } 00:46:42.404 ] 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "subsystem": "bdev", 00:46:42.404 "config": [ 00:46:42.404 { 00:46:42.404 "method": "bdev_set_options", 00:46:42.404 
"params": { 00:46:42.404 "bdev_auto_examine": true, 00:46:42.404 "bdev_io_cache_size": 256, 00:46:42.404 "bdev_io_pool_size": 65535, 00:46:42.404 "iobuf_large_cache_size": 16, 00:46:42.404 "iobuf_small_cache_size": 128 00:46:42.404 } 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "method": "bdev_raid_set_options", 00:46:42.404 "params": { 00:46:42.404 "process_window_size_kb": 1024 00:46:42.404 } 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "method": "bdev_iscsi_set_options", 00:46:42.404 "params": { 00:46:42.404 "timeout_sec": 30 00:46:42.404 } 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "method": "bdev_nvme_set_options", 00:46:42.404 "params": { 00:46:42.404 "action_on_timeout": "none", 00:46:42.404 "allow_accel_sequence": false, 00:46:42.404 "arbitration_burst": 0, 00:46:42.404 "bdev_retry_count": 3, 00:46:42.404 "ctrlr_loss_timeout_sec": 0, 00:46:42.404 "delay_cmd_submit": true, 00:46:42.404 "dhchap_dhgroups": [ 00:46:42.404 "null", 00:46:42.404 "ffdhe2048", 00:46:42.404 "ffdhe3072", 00:46:42.404 "ffdhe4096", 00:46:42.404 "ffdhe6144", 00:46:42.404 "ffdhe8192" 00:46:42.404 ], 00:46:42.404 "dhchap_digests": [ 00:46:42.404 "sha256", 00:46:42.404 "sha384", 00:46:42.404 "sha512" 00:46:42.404 ], 00:46:42.404 "disable_auto_failback": false, 00:46:42.404 "fast_io_fail_timeout_sec": 0, 00:46:42.404 "generate_uuids": false, 00:46:42.404 "high_priority_weight": 0, 00:46:42.404 "io_path_stat": false, 00:46:42.404 "io_queue_requests": 512, 00:46:42.404 "keep_alive_timeout_ms": 10000, 00:46:42.404 "low_priority_weight": 0, 00:46:42.404 "medium_priority_weight": 0, 00:46:42.404 "nvme_adminq_poll_period_us": 10000, 00:46:42.404 "nvme_error_stat": false, 00:46:42.404 "nvme_ioq_poll_period_us": 0, 00:46:42.404 "rdma_cm_event_timeout_ms": 0, 00:46:42.404 "rdma_max_cq_size": 0, 00:46:42.404 "rdma_srq_size": 0, 00:46:42.404 "reconnect_delay_sec": 0, 00:46:42.404 "timeout_admin_us": 0, 00:46:42.404 "timeout_us": 0, 00:46:42.404 "transport_ack_timeout": 0, 00:46:42.404 "transport_retry_count": 4, 00:46:42.404 "transport_tos": 0 00:46:42.404 } 00:46:42.404 }, 00:46:42.404 { 00:46:42.404 "method": "bdev_nvme_attach_controller", 00:46:42.404 "params": { 00:46:42.404 "adrfam": "IPv4", 00:46:42.404 "ctrlr_loss_timeout_sec": 0, 00:46:42.404 "ddgst": false, 00:46:42.404 "fast_io_fail_timeout_sec": 0, 00:46:42.404 "hdgst": false, 00:46:42.404 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:42.404 "name": "nvme0", 00:46:42.404 "prchk_guard": false, 00:46:42.404 "prchk_reftag": false, 00:46:42.404 "psk": "key0", 00:46:42.404 "reconnect_delay_sec": 0, 00:46:42.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:42.405 "traddr": "127.0.0.1", 00:46:42.405 "trsvcid": "4420", 00:46:42.405 "trtype": "TCP" 00:46:42.405 } 00:46:42.405 }, 00:46:42.405 { 00:46:42.405 "method": "bdev_nvme_set_hotplug", 00:46:42.405 "params": { 00:46:42.405 "enable": false, 00:46:42.405 "period_us": 100000 00:46:42.405 } 00:46:42.405 }, 00:46:42.405 { 00:46:42.405 "method": "bdev_wait_for_examine" 00:46:42.405 } 00:46:42.405 ] 00:46:42.405 }, 00:46:42.405 { 00:46:42.405 "subsystem": "nbd", 00:46:42.405 "config": [] 00:46:42.405 } 00:46:42.405 ] 00:46:42.405 }' 00:46:42.405 01:12:45 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:42.405 01:12:45 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:46:42.405 01:12:45 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 
00:46:42.405 01:12:45 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:42.405 01:12:45 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:46:42.405 01:12:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:42.405 [2024-05-15 01:12:45.537251] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:46:42.405 [2024-05-15 01:12:45.537339] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118702 ] 00:46:42.405 [2024-05-15 01:12:45.677051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:42.661 [2024-05-15 01:12:45.771600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:42.661 [2024-05-15 01:12:45.943447] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:43.226 01:12:46 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:46:43.226 01:12:46 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:46:43.226 01:12:46 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:46:43.226 01:12:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.226 01:12:46 keyring_file -- keyring/file.sh@120 -- # jq length 00:46:43.792 01:12:46 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:46:43.792 01:12:46 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:46:43.792 01:12:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:43.792 01:12:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:43.792 01:12:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.792 01:12:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.792 01:12:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:43.792 01:12:47 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:46:43.792 01:12:47 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:46:43.792 01:12:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:43.792 01:12:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:43.792 01:12:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:43.792 01:12:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.792 01:12:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:44.051 01:12:47 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:46:44.051 01:12:47 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:46:44.051 01:12:47 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:46:44.051 01:12:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:46:44.617 01:12:47 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:46:44.617 01:12:47 keyring_file -- keyring/file.sh@1 -- # cleanup 00:46:44.617 01:12:47 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.8ZwLjG7UUj /tmp/tmp.10lmWYVNww 00:46:44.617 
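The cleanup trap registered at the top of the test is running at this point: it removes the two PSK files (just traced above) and then kills bdevperf and the spdk_tgt process. A condensed, hypothetical sketch of that trap, with pids and paths from this run; killprocess is assumed to be the helper from test/common/autotest_common.sh:

    #!/usr/bin/env bash
    # Sketch of the cleanup trap from keyring/file.sh as traced here.
    cleanup() {
        rm -f "$key0path" "$key1path"   # /tmp/tmp.8ZwLjG7UUj and /tmp/tmp.10lmWYVNww in this run
        killprocess "$bperfpid"         # bdevperf, pid 118702 here
        killprocess "$tgtpid"           # spdk_tgt, pid 118195 here
    }
    trap cleanup EXIT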
01:12:47 keyring_file -- keyring/file.sh@20 -- # killprocess 118702 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 118702 ']' 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@951 -- # kill -0 118702 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@952 -- # uname 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 118702 00:46:44.617 killing process with pid 118702 00:46:44.617 Received shutdown signal, test time was about 1.000000 seconds 00:46:44.617 00:46:44.617 Latency(us) 00:46:44.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:44.617 =================================================================================================================== 00:46:44.617 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 118702' 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@966 -- # kill 118702 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@971 -- # wait 118702 00:46:44.617 01:12:47 keyring_file -- keyring/file.sh@21 -- # killprocess 118195 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 118195 ']' 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@951 -- # kill -0 118195 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@952 -- # uname 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 118195 00:46:44.617 killing process with pid 118195 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 118195' 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@966 -- # kill 118195 00:46:44.617 01:12:47 keyring_file -- common/autotest_common.sh@971 -- # wait 118195 00:46:44.617 [2024-05-15 01:12:47.845547] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:46:44.617 [2024-05-15 01:12:47.845675] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:46:45.183 00:46:45.183 real 0m16.604s 00:46:45.183 user 0m41.204s 00:46:45.183 sys 0m3.368s 00:46:45.183 01:12:48 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:46:45.183 01:12:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:45.183 ************************************ 00:46:45.183 END TEST keyring_file 00:46:45.183 ************************************ 00:46:45.183 01:12:48 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:46:45.183 01:12:48 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@312 
-- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:46:45.183 01:12:48 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:46:45.183 01:12:48 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:46:45.183 01:12:48 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:46:45.183 01:12:48 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:46:45.183 01:12:48 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:46:45.183 01:12:48 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:46:45.183 01:12:48 -- common/autotest_common.sh@721 -- # xtrace_disable 00:46:45.183 01:12:48 -- common/autotest_common.sh@10 -- # set +x 00:46:45.183 01:12:48 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:46:45.183 01:12:48 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:46:45.183 01:12:48 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:46:45.183 01:12:48 -- common/autotest_common.sh@10 -- # set +x 00:46:47.085 INFO: APP EXITING 00:46:47.085 INFO: killing all VMs 00:46:47.085 INFO: killing vhost app 00:46:47.085 INFO: EXIT DONE 00:46:47.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:47.651 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:46:47.651 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:46:48.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:48.214 Cleaning 00:46:48.214 Removing: /var/run/dpdk/spdk0/config 00:46:48.214 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:46:48.214 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:46:48.214 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:46:48.214 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:46:48.214 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:46:48.214 Removing: /var/run/dpdk/spdk0/hugepage_info 00:46:48.214 Removing: /var/run/dpdk/spdk1/config 00:46:48.214 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:46:48.214 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:46:48.214 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:46:48.214 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:46:48.215 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:46:48.215 Removing: /var/run/dpdk/spdk1/hugepage_info 00:46:48.215 Removing: /var/run/dpdk/spdk2/config 00:46:48.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:46:48.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:46:48.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:46:48.215 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:46:48.215 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:46:48.215 Removing: /var/run/dpdk/spdk2/hugepage_info 00:46:48.215 Removing: /var/run/dpdk/spdk3/config 00:46:48.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:46:48.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:46:48.215 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:46:48.215 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:46:48.215 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:46:48.215 Removing: /var/run/dpdk/spdk3/hugepage_info 00:46:48.215 Removing: /var/run/dpdk/spdk4/config 00:46:48.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:46:48.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:46:48.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:46:48.215 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:46:48.215 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:46:48.472 Removing: /var/run/dpdk/spdk4/hugepage_info 00:46:48.472 Removing: /dev/shm/nvmf_trace.0 00:46:48.472 Removing: /dev/shm/spdk_tgt_trace.pid73061 00:46:48.472 Removing: /var/run/dpdk/spdk0 00:46:48.472 Removing: /var/run/dpdk/spdk1 00:46:48.472 Removing: /var/run/dpdk/spdk2 00:46:48.472 Removing: /var/run/dpdk/spdk3 00:46:48.472 Removing: /var/run/dpdk/spdk4 00:46:48.472 Removing: /var/run/dpdk/spdk_pid100114 00:46:48.472 Removing: /var/run/dpdk/spdk_pid100237 00:46:48.472 Removing: /var/run/dpdk/spdk_pid100482 00:46:48.472 Removing: /var/run/dpdk/spdk_pid100607 00:46:48.472 Removing: /var/run/dpdk/spdk_pid100743 00:46:48.472 Removing: /var/run/dpdk/spdk_pid101081 00:46:48.472 Removing: /var/run/dpdk/spdk_pid101460 00:46:48.472 Removing: /var/run/dpdk/spdk_pid101467 00:46:48.472 Removing: /var/run/dpdk/spdk_pid103679 00:46:48.472 Removing: /var/run/dpdk/spdk_pid103974 00:46:48.472 Removing: /var/run/dpdk/spdk_pid104466 00:46:48.472 Removing: /var/run/dpdk/spdk_pid104472 00:46:48.472 Removing: /var/run/dpdk/spdk_pid104809 00:46:48.472 Removing: /var/run/dpdk/spdk_pid104823 00:46:48.472 Removing: /var/run/dpdk/spdk_pid104837 00:46:48.472 Removing: /var/run/dpdk/spdk_pid104868 00:46:48.472 Removing: /var/run/dpdk/spdk_pid104878 00:46:48.472 Removing: /var/run/dpdk/spdk_pid105023 00:46:48.472 Removing: /var/run/dpdk/spdk_pid105030 00:46:48.472 Removing: /var/run/dpdk/spdk_pid105134 00:46:48.472 Removing: /var/run/dpdk/spdk_pid105136 00:46:48.472 Removing: /var/run/dpdk/spdk_pid105239 00:46:48.472 Removing: /var/run/dpdk/spdk_pid105245 00:46:48.472 Removing: /var/run/dpdk/spdk_pid105658 00:46:48.472 Removing: /var/run/dpdk/spdk_pid105701 00:46:48.472 Removing: /var/run/dpdk/spdk_pid105780 00:46:48.472 Removing: /var/run/dpdk/spdk_pid105835 00:46:48.472 Removing: /var/run/dpdk/spdk_pid106169 00:46:48.472 Removing: /var/run/dpdk/spdk_pid106400 00:46:48.472 Removing: /var/run/dpdk/spdk_pid106887 00:46:48.472 Removing: /var/run/dpdk/spdk_pid107478 00:46:48.472 Removing: /var/run/dpdk/spdk_pid108819 00:46:48.472 Removing: /var/run/dpdk/spdk_pid109401 00:46:48.472 Removing: /var/run/dpdk/spdk_pid109409 00:46:48.472 Removing: /var/run/dpdk/spdk_pid111342 00:46:48.472 Removing: /var/run/dpdk/spdk_pid111427 00:46:48.472 Removing: /var/run/dpdk/spdk_pid111512 00:46:48.472 Removing: /var/run/dpdk/spdk_pid111607 00:46:48.472 Removing: /var/run/dpdk/spdk_pid111751 00:46:48.472 Removing: /var/run/dpdk/spdk_pid111836 00:46:48.472 Removing: /var/run/dpdk/spdk_pid111923 00:46:48.472 Removing: /var/run/dpdk/spdk_pid112008 00:46:48.472 Removing: /var/run/dpdk/spdk_pid112358 00:46:48.472 Removing: /var/run/dpdk/spdk_pid113039 00:46:48.472 Removing: /var/run/dpdk/spdk_pid114381 00:46:48.472 Removing: /var/run/dpdk/spdk_pid114581 00:46:48.472 Removing: /var/run/dpdk/spdk_pid114866 00:46:48.472 Removing: /var/run/dpdk/spdk_pid115162 00:46:48.472 Removing: /var/run/dpdk/spdk_pid115701 00:46:48.472 Removing: /var/run/dpdk/spdk_pid115710 00:46:48.472 Removing: 
/var/run/dpdk/spdk_pid116070 00:46:48.472 Removing: /var/run/dpdk/spdk_pid116225 00:46:48.472 Removing: /var/run/dpdk/spdk_pid116379 00:46:48.472 Removing: /var/run/dpdk/spdk_pid116472 00:46:48.472 Removing: /var/run/dpdk/spdk_pid116623 00:46:48.472 Removing: /var/run/dpdk/spdk_pid116732 00:46:48.473 Removing: /var/run/dpdk/spdk_pid117393 00:46:48.473 Removing: /var/run/dpdk/spdk_pid117423 00:46:48.473 Removing: /var/run/dpdk/spdk_pid117458 00:46:48.473 Removing: /var/run/dpdk/spdk_pid117706 00:46:48.473 Removing: /var/run/dpdk/spdk_pid117746 00:46:48.473 Removing: /var/run/dpdk/spdk_pid117778 00:46:48.473 Removing: /var/run/dpdk/spdk_pid118195 00:46:48.473 Removing: /var/run/dpdk/spdk_pid118230 00:46:48.473 Removing: /var/run/dpdk/spdk_pid118702 00:46:48.473 Removing: /var/run/dpdk/spdk_pid72905 00:46:48.473 Removing: /var/run/dpdk/spdk_pid73061 00:46:48.473 Removing: /var/run/dpdk/spdk_pid73323 00:46:48.473 Removing: /var/run/dpdk/spdk_pid73411 00:46:48.473 Removing: /var/run/dpdk/spdk_pid73456 00:46:48.473 Removing: /var/run/dpdk/spdk_pid73571 00:46:48.473 Removing: /var/run/dpdk/spdk_pid73601 00:46:48.473 Removing: /var/run/dpdk/spdk_pid73719 00:46:48.473 Removing: /var/run/dpdk/spdk_pid73999 00:46:48.473 Removing: /var/run/dpdk/spdk_pid74170 00:46:48.473 Removing: /var/run/dpdk/spdk_pid74249 00:46:48.473 Removing: /var/run/dpdk/spdk_pid74340 00:46:48.473 Removing: /var/run/dpdk/spdk_pid74435 00:46:48.730 Removing: /var/run/dpdk/spdk_pid74468 00:46:48.730 Removing: /var/run/dpdk/spdk_pid74504 00:46:48.730 Removing: /var/run/dpdk/spdk_pid74565 00:46:48.730 Removing: /var/run/dpdk/spdk_pid74684 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75300 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75359 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75428 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75456 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75535 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75563 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75642 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75670 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75727 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75757 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75803 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75833 00:46:48.730 Removing: /var/run/dpdk/spdk_pid75980 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76015 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76090 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76159 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76184 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76243 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76277 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76317 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76346 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76386 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76415 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76455 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76484 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76524 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76553 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76592 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76622 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76657 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76691 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76726 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76760 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76795 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76832 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76870 00:46:48.730 Removing: /var/run/dpdk/spdk_pid76903 00:46:48.730 Removing: 
/var/run/dpdk/spdk_pid76940 00:46:48.730 Removing: /var/run/dpdk/spdk_pid77004 00:46:48.730 Removing: /var/run/dpdk/spdk_pid77115 00:46:48.730 Removing: /var/run/dpdk/spdk_pid77528 00:46:48.730 Removing: /var/run/dpdk/spdk_pid84267 00:46:48.730 Removing: /var/run/dpdk/spdk_pid84595 00:46:48.730 Removing: /var/run/dpdk/spdk_pid87000 00:46:48.730 Removing: /var/run/dpdk/spdk_pid87377 00:46:48.730 Removing: /var/run/dpdk/spdk_pid87638 00:46:48.730 Removing: /var/run/dpdk/spdk_pid87688 00:46:48.730 Removing: /var/run/dpdk/spdk_pid88559 00:46:48.730 Removing: /var/run/dpdk/spdk_pid88605 00:46:48.730 Removing: /var/run/dpdk/spdk_pid88970 00:46:48.730 Removing: /var/run/dpdk/spdk_pid89508 00:46:48.730 Removing: /var/run/dpdk/spdk_pid89948 00:46:48.730 Removing: /var/run/dpdk/spdk_pid90902 00:46:48.730 Removing: /var/run/dpdk/spdk_pid91864 00:46:48.730 Removing: /var/run/dpdk/spdk_pid91981 00:46:48.730 Removing: /var/run/dpdk/spdk_pid92049 00:46:48.730 Removing: /var/run/dpdk/spdk_pid93495 00:46:48.730 Removing: /var/run/dpdk/spdk_pid93721 00:46:48.730 Removing: /var/run/dpdk/spdk_pid98725 00:46:48.730 Removing: /var/run/dpdk/spdk_pid99165 00:46:48.730 Removing: /var/run/dpdk/spdk_pid99269 00:46:48.730 Removing: /var/run/dpdk/spdk_pid99407 00:46:48.730 Removing: /var/run/dpdk/spdk_pid99449 00:46:48.730 Removing: /var/run/dpdk/spdk_pid99495 00:46:48.730 Removing: /var/run/dpdk/spdk_pid99535 00:46:48.730 Removing: /var/run/dpdk/spdk_pid99701 00:46:48.730 Removing: /var/run/dpdk/spdk_pid99850 00:46:48.730 Clean 00:46:48.989 01:12:52 -- common/autotest_common.sh@1448 -- # return 0 00:46:48.989 01:12:52 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:46:48.989 01:12:52 -- common/autotest_common.sh@727 -- # xtrace_disable 00:46:48.989 01:12:52 -- common/autotest_common.sh@10 -- # set +x 00:46:48.989 01:12:52 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:46:48.989 01:12:52 -- common/autotest_common.sh@727 -- # xtrace_disable 00:46:48.989 01:12:52 -- common/autotest_common.sh@10 -- # set +x 00:46:48.989 01:12:52 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:46:48.989 01:12:52 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:46:48.989 01:12:52 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:46:48.989 01:12:52 -- spdk/autotest.sh@387 -- # hash lcov 00:46:48.989 01:12:52 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:46:48.989 01:12:52 -- spdk/autotest.sh@389 -- # hostname 00:46:48.989 01:12:52 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:46:49.247 geninfo: WARNING: invalid characters removed from testname! 
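The lcov capture just above, together with the merge and filter passes that follow, outlines the coverage post-processing this harness runs: capture the counters from the instrumented build tree, merge the pre-test baseline with the test capture, then strip DPDK, system, and example/app sources out of the combined tracefile. A minimal sketch of that capture/merge/filter flow is below; $REPO, $OUT, and the filter list are placeholders standing in for the actual workspace layout, not the harness's real variables.

```bash
#!/usr/bin/env bash
# Sketch of the lcov capture/merge/filter flow seen in the surrounding log.
# $REPO and $OUT are placeholder paths, not the real workspace layout.
set -euo pipefail

REPO=/path/to/spdk            # source tree built with coverage enabled
OUT=/path/to/output           # where the .info tracefiles are written
LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

# Capture the counters accumulated while the tests ran.
lcov "${LCOV_OPTS[@]}" -c -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov "${LCOV_OPTS[@]}" -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Drop code that should not count toward coverage, one -r pass per pattern,
# writing back to the same combined tracefile each time (as the log does).
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${LCOV_OPTS[@]}" -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done
```

Chaining the -r filters against the same cov_total.info mirrors how the log's successive lcov invocations narrow the report down to SPDK's own sources before genhtml-style reporting.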
00:47:15.830 01:13:17 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:18.363 01:13:21 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:20.893 01:13:23 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:23.423 01:13:26 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:25.957 01:13:29 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:28.501 01:13:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:31.779 01:13:34 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:31.779 01:13:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:31.779 01:13:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:47:31.779 01:13:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:31.779 01:13:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:31.779 01:13:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:31.779 01:13:34 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:31.779 01:13:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:31.779 01:13:34 -- paths/export.sh@5 -- $ export PATH 00:47:31.779 01:13:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:31.779 01:13:34 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:47:31.779 01:13:34 -- common/autobuild_common.sh@437 -- $ date +%s 00:47:31.779 01:13:34 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715735614.XXXXXX 00:47:31.779 01:13:34 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715735614.ibCciJ 00:47:31.779 01:13:34 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:47:31.779 01:13:34 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:47:31.779 01:13:34 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:47:31.779 01:13:34 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:47:31.779 01:13:34 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:47:31.779 01:13:34 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:47:31.779 01:13:34 -- common/autobuild_common.sh@453 -- $ get_config_params 00:47:31.779 01:13:34 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:47:31.779 01:13:34 -- common/autotest_common.sh@10 -- $ set +x 00:47:31.779 01:13:34 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:47:31.779 01:13:34 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:47:31.779 01:13:34 -- pm/common@17 -- $ local monitor 00:47:31.779 01:13:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:31.779 01:13:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:31.779 01:13:34 -- pm/common@25 -- $ sleep 1 00:47:31.779 01:13:34 -- pm/common@21 -- $ date +%s 00:47:31.779 01:13:34 -- pm/common@21 -- $ date +%s 00:47:31.779 01:13:34 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715735614 00:47:31.779 01:13:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715735614 00:47:31.779 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715735614_collect-vmstat.pm.log 00:47:31.779 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715735614_collect-cpu-load.pm.log 00:47:32.713 01:13:35 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:47:32.713 01:13:35 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:47:32.713 01:13:35 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:47:32.713 01:13:35 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:47:32.713 01:13:35 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:47:32.713 01:13:35 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:47:32.713 01:13:35 -- spdk/autopackage.sh@19 -- $ timing_finish 00:47:32.713 01:13:35 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:32.713 01:13:35 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:47:32.713 01:13:35 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:47:32.713 01:13:35 -- spdk/autopackage.sh@20 -- $ exit 0 00:47:32.713 01:13:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:47:32.713 01:13:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:47:32.713 01:13:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:47:32.713 01:13:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:32.713 01:13:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:47:32.713 01:13:35 -- pm/common@44 -- $ pid=120372 00:47:32.713 01:13:35 -- pm/common@50 -- $ kill -TERM 120372 00:47:32.713 01:13:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:32.713 01:13:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:47:32.713 01:13:35 -- pm/common@44 -- $ pid=120373 00:47:32.713 01:13:35 -- pm/common@50 -- $ kill -TERM 120373 00:47:32.713 + [[ -n 5833 ]] 00:47:32.713 + sudo kill 5833 00:47:32.724 [Pipeline] } 00:47:32.742 [Pipeline] // timeout 00:47:32.747 [Pipeline] } 00:47:32.764 [Pipeline] // stage 00:47:32.769 [Pipeline] } 00:47:32.785 [Pipeline] // catchError 00:47:32.793 [Pipeline] stage 00:47:32.795 [Pipeline] { (Stop VM) 00:47:32.808 [Pipeline] sh 00:47:33.085 + vagrant halt 00:47:36.371 ==> default: Halting domain... 00:47:42.952 [Pipeline] sh 00:47:43.229 + vagrant destroy -f 00:47:47.413 ==> default: Removing domain... 
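The stop_monitor_resources trace earlier in this stage tears the resource monitors down through pid files: each collector recorded its pid under the power/ output directory when it started, and the cleanup hook checks for each pid file and sends SIGTERM to the recorded process. A minimal sketch of that pid-file pattern, assuming a placeholder $PID_DIR and monitor list rather than the harness's actual MONITOR_RESOURCES setup:

```bash
#!/usr/bin/env bash
# Sketch of the pid-file shutdown pattern from the stop_monitor_resources trace.
# $PID_DIR and the monitor names are placeholders based on what the log shows.
set -u

PID_DIR=/path/to/output/power
monitors=(collect-cpu-load collect-vmstat)

for monitor in "${monitors[@]}"; do
    pid_file="$PID_DIR/$monitor.pid"
    [[ -e $pid_file ]] || continue          # monitor never started; nothing to stop
    pid=$(<"$pid_file")                     # pid written by the collector at startup
    kill -TERM "$pid" 2>/dev/null || true   # ask the monitor to exit gracefully
done
```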
00:47:47.425 [Pipeline] sh 00:47:47.703 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:47:47.711 [Pipeline] } 00:47:47.728 [Pipeline] // stage 00:47:47.733 [Pipeline] } 00:47:47.750 [Pipeline] // dir 00:47:47.757 [Pipeline] } 00:47:47.774 [Pipeline] // wrap 00:47:47.781 [Pipeline] } 00:47:47.795 [Pipeline] // catchError 00:47:47.803 [Pipeline] stage 00:47:47.805 [Pipeline] { (Epilogue) 00:47:47.818 [Pipeline] sh 00:47:48.096 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:47:54.668 [Pipeline] catchError 00:47:54.670 [Pipeline] { 00:47:54.687 [Pipeline] sh 00:47:54.966 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:47:54.966 Artifacts sizes are good 00:47:54.975 [Pipeline] } 00:47:54.991 [Pipeline] // catchError 00:47:55.001 [Pipeline] archiveArtifacts 00:47:55.007 Archiving artifacts 00:47:55.222 [Pipeline] cleanWs 00:47:55.232 [WS-CLEANUP] Deleting project workspace... 00:47:55.232 [WS-CLEANUP] Deferred wipeout is used... 00:47:55.237 [WS-CLEANUP] done 00:47:55.239 [Pipeline] } 00:47:55.255 [Pipeline] // stage 00:47:55.258 [Pipeline] } 00:47:55.270 [Pipeline] // node 00:47:55.274 [Pipeline] End of Pipeline 00:47:55.293 Finished: SUCCESS